I'm running an LSTM on fMRI data. The input arrives with shape (timesteps, features) = (495, 359320) and the label data has shape (495,). I'm running into an issue with what the LSTM layer outputs.
import numpy as np
import keras
from keras.models import Sequential
from keras.layers import LSTM, Dense

MAX_SLIDER_VALUE = 127
EPOCHS = 3
BATCH_SIZE = 1
LOSS = 'mean_squared_error'
OPTIMIZER = 'RMSprop'

model = Sequential()
model.add(LSTM(units=MAX_SLIDER_VALUE,
               activation=keras.layers.LeakyReLU(alpha=.025),
               dropout=.08,
               input_shape=(495, 359320)))
model.add(Dense(units=MAX_SLIDER_VALUE, activation='softmax'))
model.compile(loss=LOSS, optimizer=OPTIMIZER, metrics=['acc', 'mae'])
model.fit(np.array(train_subset_nii), np.array(train_subset_labels),
          epochs=EPOCHS, batch_size=BATCH_SIZE)
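The shape check that fit() performs can be sketched with plain NumPy (tiny hypothetical stand-in arrays, not the real fMRI data):

```python
import numpy as np

# Hypothetical stand-in sizes mirroring the model above.
num_samples, timesteps, units = 1, 495, 127

# With return_sequences=False (the LSTM default), the network emits ONE
# vector per sample, so the Dense(127) head produces (num_samples, 127).
output_shape = (num_samples, units)

# fit() compares the target's per-sample shape against the model's
# per-sample output shape (i.e. the shape with the batch axis stripped):
y_bad = np.zeros((num_samples, timesteps))  # per-sample shape (495,) -> ValueError
y_ok  = np.zeros((num_samples, units))      # per-sample shape (127,) -> accepted
```

So the comparison is between the Dense layer's 127 units and the 495 values per sample in the label array, which is exactly the mismatch in the traceback.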
Checking the model's layers in the pdb debugger shows the 0th layer's output should be (127,), but I'm getting a ValueError where the target comes out as (495,):
model.layers[0].input_shape
(None, 495, 359320)
model.layers[0].output_shape
(None, 127)
model.layers[1].input_shape
(None, 127)
model.layers[1].output_shape
(None, 127)
ValueError: Error when checking target: expected dense_5 to have shape (127,) but got array with shape (495,)
Additional Note:
The code trains and runs if I change the output size to match the number of time steps:
MAX_SLIDER_VALUE=495
I'm trying to figure out what's causing the disparity between (127,) and (495,).
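For comparison, if the intent is one prediction per time step (one label for each of the 495 scans), the usual pattern is to keep the time axis with return_sequences=True and a TimeDistributed head. A minimal sketch, assuming tf.keras and small stand-in sizes (8 features instead of 359320):

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, TimeDistributed

timesteps, features = 495, 8  # 8 features as a small stand-in for 359320

model = Sequential([
    # return_sequences=True keeps the time axis, so the LSTM output is
    # (batch, timesteps, units) instead of (batch, units).
    LSTM(16, return_sequences=True, input_shape=(timesteps, features)),
    # TimeDistributed applies the same Dense(1) at every time step,
    # giving one regression output per scan: (batch, 495, 1).
    TimeDistributed(Dense(1)),
])
model.compile(loss='mean_squared_error', optimizer='RMSprop')

x = np.random.rand(1, timesteps, features).astype('float32')
y = np.random.rand(1, timesteps, 1).astype('float32')
model.train_on_batch(x, y)  # target and output shapes now agree
```

Alternatively, if each whole sequence should map to a single 127-way softmax, the labels would instead need to be one-hot encoded to shape (num_samples, 127) to match the Dense(127) output.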