
I have a small neural network in Keras:

from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import LSTM
from keras.callbacks import ModelCheckpoint

# hold out 10% of the samples for validation
contextTrain, contextTest, utteranceTrain, utteranceTest = train_test_split(context, utterance, test_size=0.1, random_state=1)

model = Sequential()
model.add(LSTM(input_shape=contextTrain.shape[1:], return_sequences=True, units=300, activation="sigmoid", kernel_initializer="glorot_normal", recurrent_initializer="glorot_normal"))
model.add(LSTM(return_sequences=True, units=300, activation="sigmoid", kernel_initializer="glorot_normal", recurrent_initializer="glorot_normal"))
model.compile(loss="cosine_proximity", optimizer="adam", metrics=["accuracy"])
model.fit(contextTrain, utteranceTrain, epochs=5000, validation_data=(contextTest, utteranceTest), callbacks=[ModelCheckpoint("model{epoch:02d}.h5", monitor='val_acc', save_best_only=True, mode='max')])

context and utterance are NumPy arrays with shape e.g. (100, 15, 300), so the input_shape of the first LSTM should be (15, 300).

I don't know what happened, but suddenly it prints negative loss and val_loss during training. They used to be positive (around 0.18 or so).

Train on 90 samples, validate on 10 samples

Epoch 1/5000 90/90 [==============================] - 5s 52ms/step - loss: -0.4729 - acc: 0.0059 - val_loss: -0.4405 - val_acc: 0.0133

Epoch 2/5000 90/90 [==============================] - 2s 18ms/step - loss: -0.5091 - acc: 0.0089 - val_loss: -0.4658 - val_acc: 0.0133

Epoch 3/5000 90/90 [==============================] - 2s 18ms/step - loss: -0.5204 - acc: 0.0170 - val_loss: -0.4829 - val_acc: 0.0200

Epoch 4/5000 90/90 [==============================] - 2s 20ms/step - loss: -0.5296 - acc: 0.0244 - val_loss: -0.4949 - val_acc: 0.0333

Epoch 5/5000 90/90 [==============================] - 2s 20ms/step - loss: -0.5370 - acc: 0.0422 - val_loss: -0.5021 - val_acc: 0.0400

What does this mean, and what could be the reason?


1 Answer


Your loss function, cosine_proximity, can indeed take negative values; according to Keras creator Francois Chollet, it will usually be negative (GitHub comment):

The loss is just a scalar that you are trying to minimize. It's not supposed to be positive! For instance a cosine proximity loss will usually be negative (trying to make proximity as high as possible by minimizing a negative scalar).

Here is another example using cosine proximity, where the values are negative, too.
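
To see this concretely, you can reproduce the loss outside of Keras. The sketch below is a minimal standalone illustration (the helper name and sample vectors are mine, not from your code); it mirrors what Keras' cosine_proximity computes, namely the negative dot product of the L2-normalized target and prediction, so the best achievable value is -1:

import numpy as np

def cosine_proximity(y_true, y_pred):
    # Keras' cosine_proximity: negative dot product of L2-normalized vectors
    y_true = y_true / np.linalg.norm(y_true)
    y_pred = y_pred / np.linalg.norm(y_pred)
    return -np.dot(y_true, y_pred)

y_true = np.array([1.0, 2.0, 3.0])
print(cosine_proximity(y_true, y_true))                     # -1.0, identical direction (best case)
print(cosine_proximity(y_true, np.array([3.0, 2.0, 1.0])))  # about -0.71, partially aligned
print(cosine_proximity(y_true, -y_true))                    # 1.0, opposite direction (worst case)

So a loss drifting from -0.47 toward -0.54, as in your log, means the predicted vectors are becoming more aligned with the targets: training is behaving as intended.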