
I'm trying to do multi-class classification on sequential data, to learn which source certain events come from based on the cumulative reading of the sources.

I'm using a simple LSTM layer with 64 units and a Dense layer with the same number of units as there are targets. The model does not seem to be learning anything, as the accuracy stays at about 1% throughout.

def create_model():
    model = Sequential()

    model.add(LSTM(64, return_sequences=False))

    model.add(Dense(8))
    model.add(Activation("softmax"))

    model.compile(
        loss="categorical_crossentropy",
        optimizer=Adam(lr=0.00001),
        metrics=["accuracy"],
    )

    return model

I have tried changing the learning rate to very small values (0.001, 0.0001, 1e-5) and training for more epochs, but no change in accuracy is observed. Am I missing something here? Is my data preprocessing incorrect, or is the model creation faulty?

Thanks in advance for your help.

Dataset


Accumulated-Reading   Source-1   Source-2   Source-3
217             0       0       0  
205             0       0       0  
206             0       0       0  
231             0       0       0  
308             0       0       1  
1548            0       0       1  
1547            0       0       1  
1530            0       0       1  
1545            0       0       1  
1544            0       0       1   
1527            0       0       1  
1533            0       0       1  
1527            0       0       1  
1527            0       0       1  
1534            0       0       1  
1520            0       0       1  
1524            0       0       1  
1523            0       0       1  
205             0       0       0  
209             0       0       0  
.  
.  
.  

I created a rolling-window dataset with SEQ_LEN=5 to feed to an LSTM network (a sketch of how such windows can be built follows the table below):


rolling_window                   labels
[205, 206, 217, 205, 206]       [0, 0, 0]
[206, 217, 205, 206, 231]       [0, 0, 0]
[217, 205, 206, 231, 308]       [0, 0, 1]
[205, 206, 231, 308, 1548]      [0, 0, 1]
[206, 231, 308, 1548, 1547]     [0, 0, 1]
[231, 308, 1548, 1547, 1530]    [0, 0, 1]
[308, 1548, 1547, 1530, 1545]   [0, 0, 1]
[1548, 1547, 1530, 1545, 1544]  [0, 0, 1]
[1547, 1530, 1545, 1544, 1527]  [0, 0, 1]
[1530, 1545, 1544, 1527, 1533]  [0, 0, 1]
[1545, 1544, 1527, 1533, 1527]  [0, 0, 1]
[1544, 1527, 1533, 1527, 1527]  [0, 0, 1]
[1527, 1533, 1527, 1527, 1534]  [0, 0, 1]
[1533, 1527, 1527, 1534, 1520]  [0, 0, 1]
[1527, 1527, 1534, 1520, 1524]  [0, 0, 1]
[1527, 1534, 1520, 1524, 1523]  [0, 0, 1]
[1534, 1520, 1524, 1523, 1520]  [0, 0, 1]
[1520, 1524, 1523, 1520, 205]   [0, 0, 0]
.  
.  
.
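
For illustration, one way such windows can be built with NumPy (a simplified sketch, not my exact preprocessing code; it assumes each window is paired with the one-hot label of its last reading, which is what the example above shows):

import numpy as np

SEQ_LEN = 5

def make_rolling_windows(readings, onehot_labels, seq_len=SEQ_LEN):
    """Pair each window of `seq_len` consecutive readings with the
    one-hot label of the window's last time step."""
    windows, labels = [], []
    for i in range(len(readings) - seq_len + 1):
        windows.append(readings[i : i + seq_len])
        labels.append(onehot_labels[i + seq_len - 1])
    return np.asarray(windows, dtype="float32"), np.asarray(labels)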

Reshaped dataset

X_train = train_df.rolling_window.values
X_train = X_train.reshape(X_train.shape[0], 1, SEQ_LEN)

Y_train = train_df.labels.values
Y_train = Y_train.reshape(Y_train.shape[0], 3)

Model

def create_model():
    model = Sequential()

    model.add(LSTM(64, input_shape=(1, SEQ_LEN), return_sequences=True))
    model.add(Activation("relu"))

    model.add(Flatten())
    model.add(Dense(3))
    model.add(Activation("softmax"))

    model.compile(
        loss="categorical_crossentropy", optimizer=Adam(lr=0.01), metrics=["accuracy"]
    )

    return model

Training

model = create_model()
model.fit(X_train, Y_train, batch_size=512, epochs=5)

Training Output

Epoch 1/5
878396/878396 [==============================] - 37s 42us/step - loss: 0.2586 - accuracy: 0.0173
Epoch 2/5
878396/878396 [==============================] - 36s 41us/step - loss: 0.2538 - accuracy: 0.0175
Epoch 3/5
878396/878396 [==============================] - 36s 41us/step - loss: 0.2538 - accuracy: 0.0176
Epoch 4/5
878396/878396 [==============================] - 37s 42us/step - loss: 0.2537 - accuracy: 0.0177
Epoch 5/5
878396/878396 [==============================] - 38s 43us/step - loss: 0.2995 - accuracy: 0.0174

[EDIT-1]
After trying Max's suggestions, here are the results (loss and accuracy are still not changing):

Suggested model

def create_model():
    model = Sequential()

    model.add(LSTM(64, return_sequences=False))

    model.add(Dense(8))
    model.add(Activation("softmax"))

    model.compile(
        loss="categorical_crossentropy",
        optimizer=Adam(lr=0.001),
        metrics=["accuracy"],
    )

    return model

X_train


array([[[205],
        [217],
        [209],
        [215],
        [206]],

       [[217],
        [209],
        [215],
        [206],
        [206]],

       [[209],
        [215],
        [206],
        [206],
        [211]],

       ...,

       [[175],
        [175],
        [173],
        [176],
        [174]],

       [[175],
        [173],
        [176],
        [174],
        [176]],

       [[173],
        [176],
        [174],
        [176],
        [173]]])

Y_train (P.S.: there are actually 8 target classes; the example above was a simplification of the real problem)


array([[0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0],
       ...,
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0]])

Training output

Epoch 1/5
878396/878396 [==============================] - 15s 17us/step - loss: 0.1329 - accuracy: 0.0190
Epoch 2/5
878396/878396 [==============================] - 15s 17us/step - loss: 0.1313 - accuracy: 0.0190
Epoch 3/5
878396/878396 [==============================] - 16s 18us/step - loss: 0.1293 - accuracy: 0.0190
Epoch 4/5
878396/878396 [==============================] - 16s 18us/step - loss: 0.1355 - accuracy: 0.0195
Epoch 5/5
878396/878396 [==============================] - 15s 18us/step - loss: 0.1315 - accuracy: 0.0236

[EDIT-2]
Based on Max's and Marcin's suggestions (below), the accuracy mostly stays below 3%, although about 1 run out of 10 hits 95% accuracy. It all depends on the accuracy at the beginning of the first epoch: if gradient descent doesn't start in the right place, it never reaches good accuracy. Do I need to use a different initializer? Changing the learning rate doesn't give repeatable results.
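
For reference, an initializer can be passed explicitly to the Keras layers. A minimal sketch (the choice of "he_normal" here is purely illustrative, not a claim that it fixes the problem):

from keras.models import Sequential
from keras.layers import LSTM, Dense, Activation
from keras.optimizers import Adam

def create_model_custom_init():
    model = Sequential()

    # Keras defaults are kernel_initializer="glorot_uniform" and
    # recurrent_initializer="orthogonal"; pass different names to change them.
    model.add(LSTM(16,
                   kernel_initializer="he_normal",
                   recurrent_initializer="orthogonal",
                   return_sequences=False))

    model.add(Dense(8, kernel_initializer="he_normal"))
    model.add(Activation("softmax"))

    model.compile(
        loss="categorical_crossentropy", optimizer=Adam(lr=0.001), metrics=["accuracy"]
    )

    return model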

Suggestions:
1. Scale/normalize X_train (done)
2. Don't reshape Y_train (done)
3. Use fewer units in the LSTM layer (reduced from 64 to 16)
4. Use a smaller batch_size (reduced from 512 to 64)

Scaled X_train

array([[[ 0.01060734],
        [ 0.03920736],
        [ 0.02014085],
        [ 0.03444091],
        [ 0.01299107]],

       [[ 0.03920728],
        [ 0.02014073],
        [ 0.03444082],
        [ 0.01299095],
        [ 0.01299107]],

       [[ 0.02014065],
        [ 0.0344407 ],
        [ 0.01299086],
        [ 0.01299095],
        [ 0.02490771]],

       ...,

       [[-0.06089251],
        [-0.06089243],
        [-0.06565897],
        [-0.05850889],
        [-0.06327543]],

       [[-0.06089251],
        [-0.06565908],
        [-0.05850898],
        [-0.06327555],
        [-0.05850878]],

       [[-0.06565916],
        [-0.0585091 ],
        [-0.06327564],
        [-0.05850889],
        [-0.06565876]]])

Non-reshaped Y_train

array([[0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0],
       ...,
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0]])

Model with fewer LSTM units

def create_model():
    model = Sequential()

    model.add(LSTM(16, return_sequences=False))

    model.add(Dense(8))
    model.add(Activation("softmax"))

    model.compile(
        loss="categorical_crossentropy", optimizer=Adam(lr=0.001), metrics=["accuracy"]
    )

    return model

Training output

Epoch 1/5
878396/878396 [==============================] - 26s 30us/step - loss: 0.1325 - accuracy: 0.0190
Epoch 2/5
878396/878396 [==============================] - 26s 29us/step - loss: 0.1352 - accuracy: 0.0189
Epoch 3/5
878396/878396 [==============================] - 26s 30us/step - loss: 0.1353 - accuracy: 0.0192
Epoch 4/5
878396/878396 [==============================] - 26s 29us/step - loss: 0.1365 - accuracy: 0.0197
Epoch 5/5
878396/878396 [==============================] - 27s 31us/step - loss: 0.1378 - accuracy: 0.0201
Try to normalize your data. Feeding values like 170 to your network might cause a lot of problems. – Marcin Możejko
Tried scaling; no change in accuracy. Please take a look at EDIT-2 and let me know if it's a weight-initialization issue. – RainChaser
What do the input values stand for? Do they have a collinear relationship with the output? If not, you may try subtracting the mean from each element, as stated by Max. – xxMrPHDxx

1 Answer


The sequence should be the first dimension of the LSTM input (the 2nd dimension of the input array), i.e.:

Reshaped dataset

X_train = train_df.rolling_window.values
X_train = X_train.reshape(X_train.shape[0], SEQ_LEN, 1)

Y_train = train_df.labels.values
Y_train = Y_train.reshape(Y_train.shape[0], 3)

The input shape is not required for LSTM. LSTM has 'tanh' activation by default, which is usually a good option.

Model

def create_model():
    model = Sequential()

    model.add(LSTM(64, return_sequences=True))

    model.add(Flatten())
    model.add(Dense(3))
    model.add(Activation("softmax"))

    model.compile(loss="categorical_crossentropy", optimizer=Adam(lr=0.01), metrics=["accuracy"])

    return model

Maybe it would be a better choice not to use a Flatten() layer, but to use return_sequences=False for the LSTM instead (a sketch of that variant is below). Just try.
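
A minimal sketch of that variant (same settings as above, only the Flatten() layer removed and return_sequences switched off):

from keras.models import Sequential
from keras.layers import LSTM, Dense, Activation
from keras.optimizers import Adam

def create_model():
    model = Sequential()

    # With return_sequences=False the LSTM outputs only its last hidden state,
    # so no Flatten() is needed before the Dense classifier.
    model.add(LSTM(64, return_sequences=False))

    model.add(Dense(3))
    model.add(Activation("softmax"))

    model.compile(loss="categorical_crossentropy", optimizer=Adam(lr=0.01), metrics=["accuracy"])

    return model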

Edit

Also try pre-processing the data with feature scaling; the raw values seem to be quite large.
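
For example, a minimal sketch using scikit-learn's StandardScaler (the choice of scaler is just an assumption; any comparable scaling would do), assuming X_train has already been reshaped to (samples, SEQ_LEN, 1) as above:

from sklearn.preprocessing import StandardScaler

# X_train is the (samples, SEQ_LEN, 1) array built above.
n_samples, seq_len, n_features = X_train.shape

# Fit the scaler on the flattened time steps (one feature per column),
# then restore the (samples, SEQ_LEN, 1) shape the LSTM expects.
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(
    X_train.reshape(-1, n_features)
).reshape(n_samples, seq_len, n_features)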