2
votes

I am trying to train an LSTM to predict the next value in a time series given the last 8 values. Instead, the network gets progressively better at echoing the current value as its prediction, rather than predicting the next value.

This is a sample from my data:
train_X
[[0.01549889 0.0200023 0.01537059 0.01064907 0.00771096 0.00352831 0.00363095 0.00413133]]

train_y
[0.00357963]

test_X
[[0.0275208 0.01929664 0.02047702 0.02625061 0.03220383 0.02612231 0.02551929 0.01510116]]

test_y
[0.01250945]
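For context, pairs like the ones above can be built from a 1-D series with a sliding window. This is a hypothetical sketch, not the asker's actual preprocessing; the names `make_windows`, `series`, and `window` are illustrative, and the reshape assumes the (samples, timesteps, features) layout implied by the `input_shape` in the model below:

```python
import numpy as np

def make_windows(series, window=8):
    """Build (last `window` values -> next value) pairs from a 1-D series."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])   # the 8 lagged inputs
        y.append(series[i + window])     # the single next value
    # Reshape to (samples, timesteps=1, features=window), matching
    # input_shape=(train_X.shape[1], train_X.shape[2]) in the model.
    X = np.array(X).reshape(-1, 1, window)
    return X, np.array(y)

series = np.linspace(0.0, 1.0, 20)
train_X, train_y = make_windows(series)
print(train_X.shape, train_y.shape)  # (12, 1, 8) (12,)
```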

Here is my model:

from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout, Activation

model = Sequential()
model.add(Activation('relu'))
model.add(LSTM(128, input_shape=(train_X.shape[1], train_X.shape[2])))
#model.add(Dropout(.2))
model.add(Dense(1, activation='relu'))
model.compile(loss='mse', optimizer='adam')
# fit network
history = model.fit(train_X, train_y, epochs=100, validation_data=(test_X, test_y),
                    batch_size=10, verbose=2, shuffle=False)

The output is always a number incredibly close to the last value in the input array. This holds for model.predict() on both the train set and the test set. It appears that the model is training to be wrong. Thanks in advance for any help that can be provided.


1 Answer

0
votes

I solved this myself. I did so by making the RNN predict more than one step into the future. This increased the loss and removed the local minimum I kept falling into (simply echoing the last input), but it made the network start training on actual forecasts.
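The target construction for that fix can be sketched as follows. This is a hypothetical illustration of the idea, not the asker's exact code; `make_multistep`, `series`, `window`, and `horizon` are illustrative names, and the final Dense layer in the model above would then need `horizon` units instead of 1:

```python
import numpy as np

def make_multistep(series, window=8, horizon=5):
    """Each target is the next `horizon` values, not just the next one."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])                     # 8 lagged inputs
        y.append(series[i + window:i + window + horizon])  # next 5 values
    X = np.array(X).reshape(-1, 1, window)  # (samples, timesteps, features)
    return X, np.array(y)

series = np.linspace(0.0, 1.0, 30)
train_X, train_y = make_multistep(series)
# Model change (sketch): model.add(Dense(5)) instead of Dense(1), so that
# echoing the last input can no longer satisfy every target at once.
print(train_X.shape, train_y.shape)  # (18, 1, 8) (18, 5)
```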