I'm learning about artificial neural networks and have implemented a standard feed-forward net with a couple of hidden layers. Now I'm trying to understand how a recurrent neural network (RNN) works in practice, and am having trouble understanding how activation propagates through the network.
In my feed-forward net, the activation is a simple layer-by-layer firing of the neurons. In a recurrent net, neurons connect back to previous layers and sometimes to themselves, so the way activation propagates must be different. Trouble is, I can't seem to find an explanation of exactly how the propagation happens.
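For context, here is roughly what I mean by layer-by-layer firing in my feed-forward net (the weights are made-up values just to illustrate the flow, not from any real trained network):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Made-up weights for a tiny 2-2-1 net, purely illustrative
W1 = np.array([[0.5, -0.3],
               [0.8,  0.2]])   # input layer -> hidden layer
W2 = np.array([[1.0, -1.0]])   # hidden layer -> output

def feed_forward(x):
    # Each layer fires exactly once, in order: input -> hidden -> output
    hidden = sigmoid(W1 @ x)
    output = sigmoid(W2 @ hidden)
    return output

print(feed_forward(np.array([1.0, 0.0])))
```

There is no notion of time here, which is why I'm unsure what changes once connections loop backward.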
How might it occur, say, for a network like this:
Input1 --->Neuron A1 ---------> Neuron B1 ---------------------> Output
               ^                    ^        ^   |
               |                    |        -----
               |                    |
Input2 --->Neuron A2 ---------> Neuron B2
I imagined it would be a rolling activation with a gradual die-down as the neurons' thresholds drive the firing toward 0, much like in biology, but it appears there is a much more computationally efficient way to do this through derivatives?
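To make the "rolling activation" idea concrete, this is the kind of time-stepped loop I was picturing, sketched for the diagram above (the weights, the number of steps, and the 0.5 input decay are all made-up numbers to illustrate my mental model, not a claim about how RNNs actually work):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Made-up weights loosely matching the diagram:
# inputs feed the A layer, the A layer feeds B1,
# and B1 has a feedback connection to itself.
W_in  = np.array([[0.6, 0.1],
                  [0.2, 0.7]])   # Input1, Input2 -> A1, A2
W_ab  = np.array([[0.9, 0.4]])   # A1, A2 -> B1 (collapsed for brevity)
w_rec = 0.5                      # B1 -> B1 feedback weight

def rolling_activation(x, steps=5):
    b1 = 0.0                     # B1 starts inactive
    history = []
    for _ in range(steps):       # "roll" the activation forward in time
        a = sigmoid(W_in @ x)
        # B1 fires from the A layer plus its own previous activation
        b1 = sigmoid(W_ab @ a + w_rec * b1)
        history.append(float(b1))
        x = x * 0.5              # gradual die-down of the input drive
    return history

print(rolling_activation(np.array([1.0, 1.0])))
```

Is this step-by-step picture at all close to what actually happens, or is the derivative-based approach doing something fundamentally different?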