I'm trying to train an LSTM model to predict CPU temperature, but the model only trains during the first epoch.
I collected the CPU usage and temperature of a server over about twenty hours as the dataset. I want to predict the CPU temperature 10 minutes ahead from the previous 10 minutes of data, so I reshaped the dataset to (1301, 10, 2): 1301 samples, 10 timesteps (one per minute), and 2 features. Then I split it into 1201 training samples and 100 validation samples.
I checked the dataset manually, so it should be correct. The windowing is done roughly as sketched below.
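This is a simplified sketch of how the windows are built; the variable and function names (`series`, `make_windows`) are just illustrative, not the exact preprocessing code:

import numpy as np

# `series` has shape (n_minutes, 2): column 0 = CPU usage, column 1 = CPU temperature
def make_windows(series, window=10):
    x, y = [], []
    for i in range(len(series) - 2 * window + 1):
        x.append(series[i:i + window])           # the previous 10 minutes of (usage, temperature)
        y.append(series[i + 2 * window - 1, 1])  # the temperature 10 minutes after the window ends
    return np.array(x), np.array(y)

x, y = make_windows(series)   # x: (1301, 10, 2), y: (1301,)
train_x, train_y = x[:1201], y[:1201]
test_x, test_y = x[1201:], y[1201:]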
I created the LSTM model as below:
from keras.models import Sequential
from keras.layers import LSTM, Dense, Flatten

model = Sequential()
model.add(LSTM(10, activation="relu", input_shape=(train_x.shape[1], train_x.shape[2]), return_sequences=True))
model.add(Flatten())  # flatten the (10, 10) sequence output
model.add(Dense(1, activation="softmax"))  # single output unit
model.compile(loss='mean_absolute_error', optimizer='RMSprop')
and tried to fit it:
model.fit(train_x, train_y, epochs=50, batch_size=32, validation_data=(test_x, test_y), verbose=2)
The log looks like this:
Epoch 1/50
- 1s - loss: 0.8016 - val_loss: 0.8147
Epoch 2/50
- 0s - loss: 0.8016 - val_loss: 0.8147
Epoch 3/50
- 0s - loss: 0.8016 - val_loss: 0.8147
Epoch 4/50
- 0s - loss: 0.8016 - val_loss: 0.8147
Epoch 5/50
- 0s - loss: 0.8016 - val_loss: 0.8147
Epoch 6/50
- 0s - loss: 0.8016 - val_loss: 0.8147
Epoch 7/50
- 0s - loss: 0.8016 - val_loss: 0.8147
Epoch 8/50
- 0s - loss: 0.8016 - val_loss: 0.8147
Epoch 9/50
- 0s - loss: 0.8016 - val_loss: 0.8147
The training time of each epoch is 0s except for the first one, and the loss never decreases. I have tried changing the number of LSTM cells, the loss function, and the optimizer, but it still doesn't work.
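For example, one of the variants I tried looked roughly like this (illustrative; the exact unit counts and settings varied between attempts), and the loss stayed flat in the same way:

# illustrative variant: more LSTM units, MSE loss, Adam optimizer
model = Sequential()
model.add(LSTM(50, activation="relu", input_shape=(train_x.shape[1], train_x.shape[2]), return_sequences=True))
model.add(Flatten())
model.add(Dense(1, activation="softmax"))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(train_x, train_y, epochs=50, batch_size=32, validation_data=(test_x, test_y), verbose=2)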