
I am using a bidirectional LSTM in a many-to-one setting (sentiment analysis task) with tflearn. I want to understand how tflearn aggregates the representations from the forward and backward LSTM layers before sending them to the softmax layer to get a probabilistic output. For instance, in the following diagram, how are the concat and aggregate layers usually implemented?

[diagram omitted: forward and backward LSTM outputs feeding concat and aggregate layers before softmax]

Is there any documentation available on this?

Thank you!


1 Answer


When using tflearn's bidirectional RNN (`bidirectional_rnn`), the outputs of the forward and backward layers are concatenated, as shown in your figure. Each output vector is therefore twice the size of a single LSTM cell's output. By default, only the last sequence output is returned; if you want the entire sequence, set `return_seq=True` and then pass the sequence to an aggregation layer such as `merge`.
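To make the concatenation concrete, here is a minimal NumPy sketch of what the concat step produces. The arrays stand in for the per-timestep outputs of the forward and backward LSTMs (the shapes and values are hypothetical, not taken from tflearn internals):

```python
import numpy as np

timesteps, hidden = 10, 128

# Hypothetical per-timestep outputs of the forward and backward LSTMs.
out_fw = np.random.randn(timesteps, hidden)
out_bw = np.random.randn(timesteps, hidden)

# Concatenation along the feature axis: each timestep becomes 2 * hidden wide.
full_seq = np.concatenate([out_fw, out_bw], axis=-1)
print(full_seq.shape)  # (10, 256)

# Default behavior (return_seq=False): only the last concatenated vector
# is passed on to the softmax layer.
last_output = full_seq[-1]
print(last_output.shape)  # (256,)
```

With `return_seq=True` you would get the full `(timesteps, 2 * hidden)` sequence instead of just the last row, which is what you would then feed into an aggregation layer.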