3
votes

I am trying to understand the concept of LSTM layers in Keras. I want to confirm some LSTM behavior and check whether I understand it correctly.

Assume that I have 1000 samples, that each sample has 1 time step, and that I have a batch size of 1 with

stateful = True

Is this the same as 1 sample with 1000 time steps and a batch size of 1 with

stateful = False

Here I am also assuming that in both cases I have the same information, just in different shapes, and that I reset the state of my LSTM layer after every training epoch.
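For example, this is the reshaping I mean (plain NumPy; the values are arbitrary and only stand in for my real data):

```python
import numpy as np

# 1000 samples, 1 time step each, 1 feature -> shape (1000, 1, 1)
many_samples = np.arange(1000, dtype="float32").reshape(1000, 1, 1)

# the same values as 1 sample with 1000 time steps -> shape (1, 1000, 1)
one_sample = many_samples.reshape(1, 1000, 1)

# both arrays contain the identical information, just shaped differently
assert np.array_equal(many_samples.ravel(), one_sample.ravel())
```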

I also think that the batch size in the stateless case only matters for my training sequence, because if I set

stateful = False 

I can use input_shape instead of batch_input_shape. So my LSTM layer does not need a batch dimension, only the time step and feature dimensions. Is this correct?
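To illustrate what I mean, here is a minimal tensorflow.keras sketch (the unit counts are arbitrary; I am using the functional API with Input, where input_shape corresponds to shape and batch_input_shape to batch_shape):

```python
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import LSTM, Dense

# Stateless: the batch dimension is left as None, so any batch size works.
stateless_in = Input(shape=(1000, 1))            # (time steps, features)
stateless_out = Dense(1)(LSTM(8)(stateless_in))
stateless = Model(stateless_in, stateless_out)

# Stateful: the batch size must be fixed (here 1) so Keras knows how many
# states to keep and carry over between batches.
stateful_in = Input(batch_shape=(1, 1, 1))       # (batch, time steps, features)
stateful_out = Dense(1)(LSTM(8, stateful=True)(stateful_in))
stateful = Model(stateful_in, stateful_out)
```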

I drew these conclusions from:

https://github.com/keras-team/keras/blob/master/keras/layers/recurrent.py#L1847

When does keras reset an LSTM state?

Understanding Keras LSTMs

And if I have a multi-layer LSTM net and the first LSTM layer is stateful, all other layers should also be stateful, right?
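For example, something like this (a minimal tensorflow.keras sketch; the unit counts are arbitrary, and the first layer needs return_sequences=True so the next LSTM receives a full sequence):

```python
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import LSTM, Dense

inp = Input(batch_shape=(1, 1, 1))  # stateful layers need a fixed batch size

# Each stateful layer keeps its own carried-over state between batches.
h = LSTM(8, stateful=True, return_sequences=True)(inp)
h = LSTM(8, stateful=True)(h)
out = Dense(1)(h)

model = Model(inp, out)
```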

I hope somebody understands what I mean and can help me. If my questions are unclear, please tell me and I will update this post.

Thanks everybody.

1
Why have you tagged this both [stateless] and [stateful]? – jhpratt
Because I want to understand the differences between both cases. – D.Luipers

1 Answer

2
votes

stateful=True means that the final state of each batch is kept and passed as the initial state for the next batch. So yes, in that case it is the same whether you have 1 batch of 1000 samples or 1000 batches of 1 sample each.
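A minimal sketch of that training pattern, assuming tensorflow.keras (the toy data, unit count, and optimizer are arbitrary; reset_states() is the tf.keras 2.x method name for clearing the carried-over state):

```python
import numpy as np
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import LSTM, Dense

# Toy data: 50 ordered samples of 1 time step and 1 feature each.
x = np.linspace(0.0, 1.0, 50, dtype="float32").reshape(50, 1, 1)
y = np.roll(x.reshape(50, 1), -1, axis=0)  # predict the next value

inp = Input(batch_shape=(1, 1, 1))         # batch size fixed to 1
lstm = LSTM(8, stateful=True)
out = Dense(1)(lstm(inp))
model = Model(inp, out)
model.compile(optimizer="adam", loss="mse")

losses = []
for epoch in range(2):
    # shuffle=False keeps the samples in order, so carrying the state
    # from one batch to the next actually makes sense.
    hist = model.fit(x, y, batch_size=1, epochs=1, shuffle=False, verbose=0)
    losses.append(hist.history["loss"][0])
    lstm.reset_states()                    # fresh state for the next epoch
```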