4 votes

I am trying to create a very simple neural network: one hidden layer with 2 neurons, for some very simple data with only one feature.

import numpy as np

# Three segments of 100 points each; the middle segment (11-20) is class 1.
X = np.concatenate([np.linspace(0, 10, 100), np.linspace(11, 20, 100), np.linspace(21, 30, 100)])
y = np.concatenate([np.repeat(0, 100), np.repeat(1, 100), np.repeat(0, 100)])

[plot of the data]

Here is the model:

from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
model.add(Dense(2, activation='sigmoid'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
model.fit(X, y, epochs=200)

In theory, this model should be able to fit the data. But even after 1000 epochs, the accuracy is still stuck at 0.667, which is exactly what you get by predicting class 0 for every sample (200 of the 300 points).

Epoch 999/1000
10/10 [==============================] - 0s 1ms/step - loss: 0.5567 - accuracy: 0.6667
Epoch 1000/1000
10/10 [==============================] - 0s 2ms/step - loss: 0.5566 - accuracy: 0.6667

I think I did something wrong. Could you suggest a modification?

It seems that there are many local minima, and the initialization can change the final model. This is the case with the nnet package in R: I had to try many seeds before finding this model (among others).

[diagram of the model found with nnet]

This is the structure I wanted to reproduce with Keras: one hidden layer with 2 neurons and sigmoid activations.

So I am wondering whether Keras has the same problem with initialization. I did not consider nnet in R a "perfect" package, and I expected Keras to perform better. If the initialization matters, does Keras try several initializations? If not, why not? Maybe because, in general, with more data (and more features), it works well enough without trying many initializations?

For example, with k-means it seems that several initializations are tried.
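If Keras does not do this automatically, I suppose I could loop over random seeds myself. Here is a sketch of what I mean (I am assuming the kernel_initializer argument with a fixed seed is the right way to control the initial weights):

from keras.models import Sequential
from keras.layers import Dense
from keras import initializers

best_acc, best_model = 0.0, None
for seed in range(5):  # try a handful of random initializations by hand
    model = Sequential()
    model.add(Dense(2, activation='sigmoid',
                    kernel_initializer=initializers.glorot_uniform(seed=seed)))
    model.add(Dense(1, activation='sigmoid',
                    kernel_initializer=initializers.glorot_uniform(seed=seed)))
    model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
    history = model.fit(X, y, epochs=200, verbose=0)
    acc = history.history['accuracy'][-1]  # key may be 'acc' in older Keras versions
    if acc > best_acc:
        best_acc, best_model = acc, model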


1 Answer

3 votes

This question illustrates the importance of input-data normalization for neural networks. Without normalization, training a neural network can be difficult because the optimization may get stuck in a poor local minimum.

Let's start by visualizing the dataset. The data is 1D, and after standardization (subtracting the mean and dividing by the standard deviation) it looks like this:

import numpy as np
import matplotlib.pyplot as plt

X_original = np.concatenate([np.linspace(0, 10, 100),
                             np.linspace(11, 20, 100),
                             np.linspace(21, 30, 100)])
X = (X_original - X_original.mean()) / X_original.std()

y = np.concatenate([np.repeat(0, 100), np.repeat(1, 100), np.repeat(0, 100)])

plt.figure()
plt.scatter(X, np.zeros(X.shape[0]), c=y)
plt.show()

[scatter plot of the normalized data, colored by class]

The best way to separate these points into their respective classes is to place two decision boundaries in the input space. Since the input is 1D, each boundary is just a point (a threshold).

This implies that a single-layer network such as logistic regression cannot classify this dataset, but a neural network with two layers and nonlinear activations should be able to.
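To see why two hidden units suffice: the difference of two shifted sigmoids forms a "bump" that is high only in the middle interval. Here is a minimal numpy sketch with hand-picked weights (the particular values are only for illustration, not what the trained network actually learns):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.linspace(-2, 2, 9)      # roughly the range of the normalized inputs

# Hand-picked weights: one unit switches on past the left boundary,
# the other past the right boundary (boundaries at +/-0.5 are illustrative).
h1 = sigmoid(10 * (x + 0.5))   # ~1 for x > -0.5
h2 = sigmoid(10 * (x - 0.5))   # ~1 for x > +0.5

# The output unit fires only where h1 is on and h2 is off: the middle band.
out = sigmoid(10 * (h1 - h2) - 5)
print(np.round(out, 2))        # close to 1 only for -0.5 < x < 0.5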

With the normalization in place and the following training script, the model easily learns to classify the points.

import keras
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(2, activation='sigmoid'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer=keras.optimizers.Adam(1e-1), metrics=['accuracy'])
model.fit(X, y, epochs=20)

Train on 300 samples
Epoch 1/20
300/300 [==============================] - 1s 2ms/sample - loss: 0.6455 - accuracy: 0.6467
Epoch 2/20
300/300 [==============================] - 0s 79us/sample - loss: 0.6493 - accuracy: 0.6667
Epoch 3/20
300/300 [==============================] - 0s 85us/sample - loss: 0.6397 - accuracy: 0.6667
Epoch 4/20
300/300 [==============================] - 0s 100us/sample - loss: 0.6362 - accuracy: 0.6667
Epoch 5/20
300/300 [==============================] - 0s 115us/sample - loss: 0.6342 - accuracy: 0.6667
Epoch 6/20
300/300 [==============================] - 0s 96us/sample - loss: 0.6317 - accuracy: 0.6667
Epoch 7/20
300/300 [==============================] - 0s 93us/sample - loss: 0.6110 - accuracy: 0.6667
Epoch 8/20
300/300 [==============================] - 0s 110us/sample - loss: 0.5746 - accuracy: 0.6667
Epoch 9/20
300/300 [==============================] - 0s 142us/sample - loss: 0.5103 - accuracy: 0.6900
Epoch 10/20
300/300 [==============================] - 0s 124us/sample - loss: 0.4207 - accuracy: 0.9367
Epoch 11/20
300/300 [==============================] - 0s 124us/sample - loss: 0.3283 - accuracy: 0.9833
Epoch 12/20
300/300 [==============================] - 0s 124us/sample - loss: 0.2553 - accuracy: 0.9800
Epoch 13/20
300/300 [==============================] - 0s 138us/sample - loss: 0.2030 - accuracy: 1.0000
Epoch 14/20
300/300 [==============================] - 0s 124us/sample - loss: 0.1624 - accuracy: 1.0000
Epoch 15/20
300/300 [==============================] - 0s 150us/sample - loss: 0.1375 - accuracy: 1.0000
Epoch 16/20
300/300 [==============================] - 0s 122us/sample - loss: 0.1161 - accuracy: 1.0000
Epoch 17/20
300/300 [==============================] - 0s 115us/sample - loss: 0.1025 - accuracy: 1.0000
Epoch 18/20
300/300 [==============================] - 0s 126us/sample - loss: 0.0893 - accuracy: 1.0000
Epoch 19/20
300/300 [==============================] - 0s 121us/sample - loss: 0.0804 - accuracy: 1.0000
Epoch 20/20
300/300 [==============================] - 0s 132us/sample - loss: 0.0720 - accuracy: 1.0000
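As a quick sanity check after training, one can compare the predicted classes with the labels (a sketch; predict returns the sigmoid outputs as an (N, 1) array):

probs = model.predict(X).ravel()   # predicted probabilities for class 1
preds = (probs > 0.5).astype(int)
print(preds[::50])                 # a few predictions across the range
print(y[::50])                     # the corresponding true labels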

Since the model is very simple, the choice of learning rate and optimizer affects how quickly it learns. With the SGD optimizer and a learning rate of 1e-1, the model may take much longer to train than with the Adam optimizer at the same learning rate.
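For comparison, here is the same model compiled with plain SGD at the same learning rate (a sketch; the number of epochs it actually needs will vary from run to run):

model_sgd = Sequential()
model_sgd.add(Dense(2, activation='sigmoid'))
model_sgd.add(Dense(1, activation='sigmoid'))
model_sgd.compile(loss='binary_crossentropy',
                  optimizer=keras.optimizers.SGD(1e-1),  # plain SGD, learning rate 0.1
                  metrics=['accuracy'])
model_sgd.fit(X, y, epochs=200)  # typically needs far more epochs than Adam here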