
For convolutional networks, one can view the convolutional part (convolution, max-pooling, etc.) as feature extraction, which is then fed into a feed-forward network that does the classification (more or less).

Is the same true for recurrent networks (RNN, LSTM, etc.)? That is, do the recurrent layers create a representation of the data/features which is then fed into feed-forward layers?

I was thinking in terms of sentiment analysis, i.e. a "sequence to one" model. Do you think that a network with one recurrent layer plus one feed-forward layer would outperform a network with only one recurrent layer?
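To make the "sequence to one" setup concrete, here is a minimal NumPy sketch (all layer sizes and weight scales are hypothetical, chosen only for illustration): a plain tanh recurrent layer summarizes the sequence into its final hidden state, and a feed-forward layer on top maps that state to a sentiment probability.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d_in, d_h = 5, 8, 16                  # sequence length, input dim, hidden dim (illustrative)

W_xh = rng.normal(0, 0.1, (d_h, d_in))   # input-to-hidden weights
W_hh = rng.normal(0, 0.1, (d_h, d_h))    # hidden-to-hidden (recurrent) weights
b_h  = np.zeros(d_h)
W_out = rng.normal(0, 0.1, (1, d_h))     # feed-forward classifier on top
b_out = np.zeros(1)

x = rng.normal(size=(T, d_in))           # one input sequence
h = np.zeros(d_h)
for t in range(T):                       # recurrent "feature extraction"
    h = np.tanh(W_xh @ x[t] + W_hh @ h + b_h)

logit = W_out @ h + b_out                # feed-forward classification head
prob = 1 / (1 + np.exp(-logit))          # sigmoid -> sentiment probability
print(prob.shape)  # (1,)
```

The split mirrors the CNN analogy from the question: everything before `W_out` builds a representation of the sequence, and only the last dense layer does the classifying.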

1 Answer


Recurrent layers are like feed-forward neural networks with a feedback loop: at each step they pass useful information from the past on to the present.
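In the standard tanh-RNN formulation (generic symbols, not taken from the linked post), the feedback loop is the previous hidden state entering the update:

$$h_t = \tanh(W_x x_t + W_h h_{t-1} + b)$$

Without the $W_h h_{t-1}$ term this is just an ordinary feed-forward layer applied to $x_t$; the recurrent term is what carries information forward through time.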

A decent explanation is at https://kevinzakka.github.io/2017/07/20/rnn/

As for adding more layers to an RNN, you can find the details on deep RNNs in https://arxiv.org/pdf/1312.6026.pdf

The paper reports that deep RNNs outperform conventional (single-layer) RNNs.
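A stacked ("deep") RNN of the kind the paper studies can be sketched in a few lines of NumPy (dimensions and helper names here are my own, purely illustrative): the second recurrent layer reads the hidden-state sequence produced by the first, rather than the raw inputs.

```python
import numpy as np

rng = np.random.default_rng(1)
T, d_in, d_h = 6, 4, 8                   # sequence length, input dim, hidden dim (illustrative)

def rnn_layer(xs, W_x, W_h, b):
    """Run a simple tanh RNN over a sequence and return all hidden states."""
    h = np.zeros(W_h.shape[0])
    hs = []
    for x in xs:
        h = np.tanh(W_x @ x + W_h @ h + b)
        hs.append(h)
    return np.stack(hs)                  # shape (T, d_h)

def init(d_out, d_inp):
    # Small random weights for one layer: input, recurrent, bias.
    return (rng.normal(0, 0.1, (d_out, d_inp)),
            rng.normal(0, 0.1, (d_out, d_out)),
            np.zeros(d_out))

xs = rng.normal(size=(T, d_in))
h1 = rnn_layer(xs, *init(d_h, d_in))     # layer 1: reads the raw inputs
h2 = rnn_layer(h1, *init(d_h, d_h))      # layer 2: reads layer 1's hidden states
print(h2.shape)  # (6, 8)
```

For the sequence-to-one case in the question, you would take the last row of `h2` (the final hidden state of the top layer) and feed it into the classifying feed-forward layer.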