Spiking Neural Network from scratch achieves 8% accuracy, with no backpropagation or SGD

I created a genetic hyperparameter optimizer, and the network now averages 8% accuracy, which is ~3% above chance. A link to the source code, with a detailed video and markdown explanations, is in the comments.

It usually starts below 5%, slowly improves, and eventually can start dipping back below 5%. Sometimes it stays stable around 7-9% for a long time. All of this leads me to believe there are glimmers of learning taking place.

There is no backpropagation or SGD. It learns via STDP (spike-timing-dependent plasticity) and a reward mechanism. Each example is presented n times (500 in this case), which produces a spike train that feeds an eligibility list. At the end of a turn, depending on whether the answer was correct, we adjust the weights using the reward as a multiplier. The spike timing keeps track of the sequence of neuron firings and which ones were more likely to have led to the correct answer.

Let me know what you think!
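For anyone who wants a concrete picture of the learning loop described above, here is a minimal sketch of reward-modulated STDP: present an example many times, accumulate an eligibility trace for pre-before-post spike pairs, then scale the weight update by a reward that depends on whether the answer was correct. All names, sizes, and constants here are my own assumptions for illustration, not the author's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_OUT = 20, 10                      # toy layer sizes (assumption)
W = rng.uniform(0.0, 0.5, size=(N_OUT, N_IN))

def present(example, n_repeats=500, threshold=1.0):
    """Present one example n_repeats times; build an eligibility trace
    crediting every (pre, post) pair where a pre spike preceded a post spike
    in the same step (a crude stand-in for full STDP timing windows)."""
    eligibility = np.zeros_like(W)
    votes = np.zeros(N_OUT)
    for _ in range(n_repeats):
        pre = (rng.random(N_IN) < example).astype(float)  # rate-coded input spikes
        post = (W @ pre) > threshold                      # output spikes this step
        votes += post
        eligibility += np.outer(post.astype(float), pre)
    return eligibility, int(np.argmax(votes))             # most-active output = answer

def learn(W, example, label, lr=0.001):
    """One 'turn': present, compare answer to label, apply reward-scaled update."""
    eligibility, answer = present(example)
    reward = 1.0 if answer == label else -1.0             # reward as a multiplier
    W += lr * reward * eligibility
    np.clip(W, 0.0, 1.0, out=W)                           # keep weights bounded
    return answer

# toy usage: an input pattern with two strongly active channels
example = np.zeros(N_IN)
example[0:2] = 0.9
answer = learn(W, example, label=3)
```

The key property is that credit assignment is local: only synapses that were actually implicated in the spike sequence (via the eligibility trace) get moved, and the global reward signal only sets the sign and scale.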
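Since the genetic hyperparameter optimizer is what pushed the average up to 8%, here is a short sketch of what that kind of search typically looks like: a population of hyperparameter sets, selection of the fittest, then crossover and mutation. The parameter names, ranges, and dummy fitness function are hypothetical placeholders; in the real project, fitness would be the SNN's accuracy after training with those hyperparameters.

```python
import random

random.seed(0)

# hypothetical search space -- not the author's actual hyperparameters
RANGES = {"lr": (1e-4, 1e-1), "threshold": (0.5, 2.0), "decay": (0.8, 0.999)}

def random_genome():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in RANGES.items()}

def crossover(a, b):
    # each hyperparameter is inherited from one parent at random
    return {k: random.choice((a[k], b[k])) for k in RANGES}

def mutate(genome, rate=0.2):
    child = dict(genome)
    for k, (lo, hi) in RANGES.items():
        if random.random() < rate:
            # gaussian perturbation, clamped back into the legal range
            child[k] = min(hi, max(lo, child[k] + random.gauss(0, (hi - lo) * 0.1)))
    return child

def fitness(genome):
    # stand-in for "train the SNN with these hyperparameters, return accuracy";
    # a smooth dummy objective so the sketch runs end to end
    return -abs(genome["lr"] - 0.01) - abs(genome["threshold"] - 1.0)

def evolve(pop_size=20, generations=10, elite=4):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:elite]                 # elitism: keep the best as-is
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - elite)]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

One design note: because each fitness evaluation means a full SNN training run, small populations with elitism (as above) are a common compromise to keep the number of evaluations manageable.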