1 vote

I am trying to set up an LSTM autoencoder/decoder for time series data and continually get an "Incompatible shapes" error when trying to train the model. I am following the steps and using the toy data from this example. See the code and results below. Note: TensorFlow version 2.3.0.

Create the data and put it into sequences to temporalize it for the LSTM, in the form (samples, timesteps, features).

import numpy as np
import pandas as pd

timeseries = np.array([[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
                       [0.1**3, 0.2**3, 0.3**3, 0.4**3, 0.5**3, 0.6**3, 0.7**3, 0.8**3, 0.9**3]]).transpose()

timeseries_df = pd.DataFrame(timeseries)

def create_sequenced_dataset(X, time_steps=10):
    Xs, ys = [], []  # windows and their next-step targets
    for i in range(len(X) - time_steps):  # slide a window across the data frame
        v = X.iloc[i:(i + time_steps)].values  # window of length time_steps starting at i
        Xs.append(v)
        ys.append(X.iloc[i + time_steps].values)  # the value immediately after the window

    return np.array(Xs), np.array(ys)  # convert lists into numpy arrays and return

X, y = create_sequenced_dataset(timeseries_df, time_steps=3)
timesteps = X.shape[1]
n_features = X.shape[2]
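
A quick shape check (a minimal sketch; the output matches the arrays printed further down):

print(X.shape)  # (6, 3, 2): 6 windows, 3 timesteps, 2 features
print(y.shape)  # (6, 2): one next-step target per window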

Create the LSTM autoencoder, with the RepeatVector layer bridging the encoder and decoder, and attempt to train the model.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

model = Sequential()
model.add(LSTM(128, input_shape=(timesteps, n_features), return_sequences=False))
model.add(RepeatVector(timesteps))
model.add(LSTM(128, return_sequences=True))
model.add(TimeDistributed(Dense(n_features)))
model.compile(optimizer='adam', loss='mse')
model.summary()

model.fit(X, y, epochs=10, batch_size=4)

I consistently get this error:

tensorflow.python.framework.errors_impl.InvalidArgumentError:  Incompatible shapes: [4,3,2] vs. [4,2]
     [[node gradient_tape/mean_squared_error/BroadcastGradientArgs (defined at <ipython-input-9-56896428cea9>:1) ]] [Op:__inference_train_function_10833]
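
For reference, the model's output shape can be inspected directly (model.output_shape is a standard Keras attribute):

print(model.output_shape)  # (None, 3, 2)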

X looks like:

array([[[0.1  , 0.001],
        [0.2  , 0.008],
        [0.3  , 0.027]],
       [[0.2  , 0.008],
        [0.3  , 0.027],
        [0.4  , 0.064]],
       [[0.3  , 0.027],
        [0.4  , 0.064],
        [0.5  , 0.125]],
       [[0.4  , 0.064],
        [0.5  , 0.125],
        [0.6  , 0.216]],
       [[0.5  , 0.125],
        [0.6  , 0.216],
        [0.7  , 0.343]],
       [[0.6  , 0.216],
        [0.7  , 0.343],
        [0.8  , 0.512]]])

y looks like:

array([[0.4  , 0.064],
       [0.5  , 0.125],
       [0.6  , 0.216],
       [0.7  , 0.343],
       [0.8  , 0.512],
       [0.9  , 0.729]])
Can you also print the shapes of X and y? – inferno

2 Answers

0 votes

I also did not understand at first what the problem was, but then I re-read the definition of an autoencoder. Since this is an autoencoder, we feed X in as both the input and the target (y does not participate in the model at all, since we are trying to learn the dependencies within X and then recreate them). Some examples express this as y = X.copy(); here it becomes model.fit(X, X, epochs=300, batch_size=5, verbose=0).
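
As a minimal sketch, the corrected call against the model compiled in the question:

# Fit the autoencoder to reconstruct its own input; y is not used at all.
history = model.fit(X, X, epochs=300, batch_size=5, verbose=0)
print(history.history['loss'][-1])  # final reconstruction loss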

0 votes

As the message says, this is a shape mismatch in what you are passing to model.fit.

From the data you have given, X has shape (6, 3, 2) while Y has shape (6, 2), which is incompatible: the decoder outputs a full sequence of shape (timesteps, n_features) per sample, so the targets must have shape (6, 3, 2) as well.
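
The construction of Y is not shown here; one choice consistent with the autoencoder setup (an assumption on my part, in line with the first answer) is to target the input sequences themselves:

# Hypothetical construction -- the answer does not show how Y was made.
# For an autoencoder, the natural targets are the input sequences.
Y = X  # shape (6, 3, 2), matching the model output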

Below is the modified code for the same example, with both X and Y having shape (6, 3, 2).

model = Sequential()
model.add(LSTM(128, input_shape=(timesteps, n_features), return_sequences=False))
model.add(RepeatVector(timesteps))
model.add(LSTM(128, return_sequences=True))
model.add(TimeDistributed(Dense(n_features)))
model.compile(optimizer='adam', loss='mse')
model.summary()

model.fit(X, Y, epochs=10, batch_size=4)

Result:

Epoch 1/10
2/2 [==============================] - 0s 5ms/step - loss: 0.0069
Epoch 2/10
2/2 [==============================] - 0s 4ms/step - loss: 0.0065
Epoch 3/10
2/2 [==============================] - 0s 4ms/step - loss: 0.0065
Epoch 4/10
2/2 [==============================] - 0s 4ms/step - loss: 0.0062
Epoch 5/10
2/2 [==============================] - 0s 4ms/step - loss: 0.0059
Epoch 6/10
2/2 [==============================] - 0s 4ms/step - loss: 0.0053
Epoch 7/10
2/2 [==============================] - 0s 5ms/step - loss: 0.0048
Epoch 8/10
2/2 [==============================] - 0s 5ms/step - loss: 0.0046
Epoch 9/10
2/2 [==============================] - 0s 5ms/step - loss: 0.0044
Epoch 10/10
2/2 [==============================] - 0s 6ms/step - loss: 0.0043
<tensorflow.python.keras.callbacks.History at 0x7ff352f9ccf8>
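
Once trained, a natural follow-up (a sketch, not part of the run above) is to check how well the autoencoder reconstructs its input:

# Compare reconstructions against the original sequences.
X_pred = model.predict(X)          # shape (6, 3, 2), same as X
print(np.mean((X - X_pred) ** 2))  # overall reconstruction error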