3
votes

My input time series data has shape (nb_samples, 75, 32), where 75 is the number of timesteps and 32 is the input dimension.

from keras.models import Sequential
from keras.layers import LSTM

model = Sequential()
model.add(LSTM(4, input_shape=(75, 32)))
model.summary()

The LSTM weight vectors [W_i, W_c, W_f, W_o] are all 32-dimensional, but the output is just a single vector: the output shape of the above model is (None, 4). Since the output of an LSTM is also a vector, shouldn't it be (32, 4) for a many-to-one implementation like the one above? Why does it give a single vector even for multi-dimensional input?
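
For reference, you can inspect the actual weight shapes directly; a minimal check, assuming Keras 2's layout, where the four gate kernels are stored stacked into a single array:

kernel, recurrent_kernel, bias = model.get_weights()
print(kernel.shape)            # (32, 16): input_dim x (4 gates * 4 units)
print(recurrent_kernel.shape)  # (4, 16):  units x (4 gates * 4 units)
print(bias.shape)              # (16,):    4 gates * 4 units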

1
What do you mean by saying that these vectors have 32 dimensions? That's not true. - Marcin Możejko

1 Answer

4
votes

As you can read in the Keras documentation for recurrent layers:

For an input of shape (nb_sample, timestep, input_dim), you have two possible outputs:

  • if you set return_sequences=True in your LSTM (which is not your case), you return every hidden state, i.e. the intermediate steps as the LSTM 'reads' your sequence. You get an output of shape (nb_sample, timestep, output_dim).

  • if you set return_sequences=False (which is the default), it will only output the last hidden state, so you will get an output of shape (nb_sample, output_dim).
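
Here is a minimal sketch of the two settings, using the shapes from the question:

from keras.models import Sequential
from keras.layers import LSTM

# Default: only the last hidden state is returned -> (None, 4)
last_state = Sequential()
last_state.add(LSTM(4, input_shape=(75, 32)))
print(last_state.output_shape)  # (None, 4)

# Full sequence of hidden states -> (None, 75, 4)
all_states = Sequential()
all_states.add(LSTM(4, return_sequences=True, input_shape=(75, 32)))
print(all_states.output_shape)  # (None, 75, 4)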

So if you define your LSTM layer like this:

model.add(LSTM(4, return_sequences=True, input_shape=(75, 32)))

you will get an output of shape (None, 75, 4). Note that the temporal dimension is the first one after the batch dimension: if 32 were your time dimension, you would have to transpose the data before feeding it to the LSTM, as in the sketch below.
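
If your data did arrive with time on the last axis, a transpose along these lines would fix it (a sketch using hypothetical random data):

import numpy as np

# Hypothetical data with time last: (nb_samples, input_dim, timesteps)
x = np.random.rand(10, 32, 75)

# Swap the last two axes to get (nb_samples, timesteps, input_dim)
x = np.transpose(x, (0, 2, 1))
print(x.shape)  # (10, 75, 32)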

I hope this helps :)