I've just started looking into recurrent neural networks. I found three sources of information on Elman networks (Elman 1991).
(Example and code) http://mnemstudio.org/neural-networks-elman.htm
(Paper) http://www.sysc.pdx.edu/classes/Werbos-Backpropagation%20through%20time.pdf
(Q&A) Elman and Jordan context values during training for neural network
According to the first resource, the weights from the hidden layer to the context layer, and from the context layer to the hidden layer, are not updated.
The second resource likewise sets these weight updates to 0, which means it never updates those weights.
But in the third resource, on Stack Overflow, the answerer claims: "The context neurons neuron values themselves are not updated as training progresses. The weights between them and the next layer ARE updated during training."
I understand that the context neurons save the values of the hidden neurons at time t and feed them (together with the input neurons) into the hidden layer at time t + 1. But do we have to update the weights in between?
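To make my understanding concrete, here is a minimal sketch of the forward pass (toy sizes and variable names are my own, not from any of the sources). It separates the two kinds of connections I'm asking about: the hidden-to-context link, which is just a fixed 1:1 copy, and the context-to-hidden weights `W_ch`, which (per the Stack Overflow answer) would be trainable like any other weight matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical)
n_in, n_hid, n_out = 2, 4, 1

# Ordinary trainable weight matrices
W_xh = rng.normal(scale=0.1, size=(n_in, n_hid))   # input  -> hidden
W_ch = rng.normal(scale=0.1, size=(n_hid, n_hid))  # context -> hidden (trainable?)
W_ho = rng.normal(scale=0.1, size=(n_hid, n_out))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def step(x, context):
    """One time step: the hidden layer sees the current input AND the
    context (a copy of the previous hidden state)."""
    h = sigmoid(x @ W_xh + context @ W_ch)
    y = h @ W_ho
    return h, y

# Run two time steps on a toy input sequence.
context = np.zeros(n_hid)          # context starts empty
for x in [np.array([1.0, 0.0]), np.array([0.0, 1.0])]:
    h, y = step(x, context)
    # Hidden -> context is a verbatim copy (a fixed weight of 1),
    # so there is nothing here for training to update.
    context = h.copy()
```

If the Stack Overflow answer is right, backpropagation would update `W_xh`, `W_ch`, and `W_ho`, while the `context = h.copy()` step stays fixed forever; the first two resources instead seem to freeze `W_ch` as well. That discrepancy is exactly what I'd like clarified.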