2
votes

Input data (xs):

array([[ 0.28555165, -0.03237782,  0.28525293,  0.2898103 ,  0.03093571],
       [ 0.28951845, -0.03555493,  0.28561172,  0.29346927,  0.03171808],
       [ 0.28326774, -0.03258297,  0.27879436,  0.2804189 ,  0.03079463],
       [ 0.27617554, -0.03335768,  0.27927279,  0.28285823,  0.03015975],
       [ 0.29084073, -0.0308716 ,  0.28788416,  0.29102994,  0.03019182],
       [ 0.27353097, -0.03571149,  0.26874771,  0.27310096,  0.03021105],
       [ 0.26163049, -0.03528769,  0.25989708,  0.26688066,  0.0303842 ],
       [ 0.26223156, -0.03429704,  0.26169114,  0.26127023,  0.02962107],
       [ 0.26259217, -0.03496377,  0.26145193,  0.26773441,  0.02942868],
       [ 0.26583775, -0.03354123,  0.26240878,  0.26358757,  0.02925554]])

Output data (ys):

array([[ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  1.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  1.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  1.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  1.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  1.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  1.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  1.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  1.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  1.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  1.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
         0.,  0.,  0.,  0.,  0.,  0.]])

The data is split 70% for training and 30% for validation.

Training this network, I can see that loss and val_loss decrease, but acc and val_acc remain static at 0.5714 and 0 respectively:

Train on 7 samples, validate on 3 samples
Epoch 1/60
0s - loss: 4.4333 - acc: 0.0000e+00 - val_loss: 4.4340 - val_acc: 0.0000e+00
Epoch 2/60
0s - loss: 4.4335 - acc: 0.0000e+00 - val_loss: 4.4338 - val_acc: 0.0000e+00
Epoch 3/60
0s - loss: 4.4331 - acc: 0.0000e+00 - val_loss: 4.4335 - val_acc: 0.0000e+00
Epoch 4/60
0s - loss: 4.4319 - acc: 0.0000e+00 - val_loss: 4.4331 - val_acc: 0.0000e+00
Epoch 5/60
0s - loss: 4.4300 - acc: 0.0000e+00 - val_loss: 4.4326 - val_acc: 0.0000e+00
Epoch 6/60
0s - loss: 4.4267 - acc: 0.0000e+00 - val_loss: 4.4320 - val_acc: 0.0000e+00
Epoch 7/60
0s - loss: 4.4270 - acc: 0.1429 - val_loss: 4.4314 - val_acc: 0.0000e+00
Epoch 8/60
0s - loss: 4.4257 - acc: 0.1429 - val_loss: 4.4307 - val_acc: 0.0000e+00
Epoch 9/60
0s - loss: 4.4240 - acc: 0.0000e+00 - val_loss: 4.4300 - val_acc: 0.0000e+00
Epoch 10/60
0s - loss: 4.4206 - acc: 0.1429 - val_loss: 4.4292 - val_acc: 0.0000e+00
Epoch 11/60
0s - loss: 4.4192 - acc: 0.1429 - val_loss: 4.4284 - val_acc: 0.0000e+00
Epoch 12/60
0s - loss: 4.4156 - acc: 0.4286 - val_loss: 4.4276 - val_acc: 0.0000e+00
Epoch 13/60
0s - loss: 4.4135 - acc: 0.4286 - val_loss: 4.4267 - val_acc: 0.0000e+00
Epoch 14/60
0s - loss: 4.4114 - acc: 0.5714 - val_loss: 4.4258 - val_acc: 0.0000e+00
Epoch 15/60
0s - loss: 4.4072 - acc: 0.7143 - val_loss: 4.4248 - val_acc: 0.0000e+00
Epoch 16/60
0s - loss: 4.4046 - acc: 0.4286 - val_loss: 4.4239 - val_acc: 0.0000e+00
Epoch 17/60
0s - loss: 4.4012 - acc: 0.5714 - val_loss: 4.4229 - val_acc: 0.0000e+00
Epoch 18/60
0s - loss: 4.3967 - acc: 0.5714 - val_loss: 4.4219 - val_acc: 0.0000e+00
Epoch 19/60
0s - loss: 4.3956 - acc: 0.5714 - val_loss: 4.4209 - val_acc: 0.0000e+00
Epoch 20/60
0s - loss: 4.3906 - acc: 0.5714 - val_loss: 4.4198 - val_acc: 0.0000e+00
Epoch 21/60
0s - loss: 4.3883 - acc: 0.5714 - val_loss: 4.4188 - val_acc: 0.0000e+00
Epoch 22/60
0s - loss: 4.3849 - acc: 0.5714 - val_loss: 4.4177 - val_acc: 0.0000e+00
Epoch 23/60
0s - loss: 4.3826 - acc: 0.5714 - val_loss: 4.4166 - val_acc: 0.0000e+00
Epoch 24/60
0s - loss: 4.3781 - acc: 0.5714 - val_loss: 4.4156 - val_acc: 0.0000e+00
Epoch 25/60
0s - loss: 4.3757 - acc: 0.5714 - val_loss: 4.4145 - val_acc: 0.0000e+00
Epoch 26/60
0s - loss: 4.3686 - acc: 0.5714 - val_loss: 4.4134 - val_acc: 0.0000e+00
Epoch 27/60
0s - loss: 4.3666 - acc: 0.5714 - val_loss: 4.4123 - val_acc: 0.0000e+00
Epoch 28/60
0s - loss: 4.3665 - acc: 0.5714 - val_loss: 4.4111 - val_acc: 0.0000e+00
Epoch 29/60
0s - loss: 4.3611 - acc: 0.5714 - val_loss: 4.4100 - val_acc: 0.0000e+00
Epoch 30/60
0s - loss: 4.3573 - acc: 0.5714 - val_loss: 4.4089 - val_acc: 0.0000e+00
Epoch 31/60
0s - loss: 4.3537 - acc: 0.5714 - val_loss: 4.4078 - val_acc: 0.0000e+00
Epoch 32/60
0s - loss: 4.3495 - acc: 0.5714 - val_loss: 4.4066 - val_acc: 0.0000e+00
Epoch 33/60
0s - loss: 4.3452 - acc: 0.5714 - val_loss: 4.4055 - val_acc: 0.0000e+00
Epoch 34/60
0s - loss: 4.3405 - acc: 0.5714 - val_loss: 4.4044 - val_acc: 0.0000e+00
Epoch 35/60
0s - loss: 4.3384 - acc: 0.5714 - val_loss: 4.4032 - val_acc: 0.0000e+00
Epoch 36/60
0s - loss: 4.3390 - acc: 0.5714 - val_loss: 4.4021 - val_acc: 0.0000e+00
Epoch 37/60
0s - loss: 4.3336 - acc: 0.5714 - val_loss: 4.4009 - val_acc: 0.0000e+00
Epoch 38/60
0s - loss: 4.3278 - acc: 0.5714 - val_loss: 4.3998 - val_acc: 0.0000e+00
Epoch 39/60
0s - loss: 4.3254 - acc: 0.5714 - val_loss: 4.3986 - val_acc: 0.0000e+00
Epoch 40/60
0s - loss: 4.3205 - acc: 0.5714 - val_loss: 4.3975 - val_acc: 0.0000e+00
Epoch 41/60
0s - loss: 4.3171 - acc: 0.5714 - val_loss: 4.3963 - val_acc: 0.0000e+00
Epoch 42/60
0s - loss: 4.3150 - acc: 0.5714 - val_loss: 4.3952 - val_acc: 0.0000e+00
Epoch 43/60
0s - loss: 4.3106 - acc: 0.5714 - val_loss: 4.3940 - val_acc: 0.0000e+00
Epoch 44/60
0s - loss: 4.3064 - acc: 0.5714 - val_loss: 4.3929 - val_acc: 0.0000e+00
Epoch 45/60
0s - loss: 4.3009 - acc: 0.5714 - val_loss: 4.3917 - val_acc: 0.0000e+00
Epoch 46/60
0s - loss: 4.2995 - acc: 0.5714 - val_loss: 4.3905 - val_acc: 0.0000e+00
Epoch 47/60
0s - loss: 4.2972 - acc: 0.5714 - val_loss: 4.3894 - val_acc: 0.0000e+00
Epoch 48/60
0s - loss: 4.2918 - acc: 0.5714 - val_loss: 4.3882 - val_acc: 0.0000e+00
Epoch 49/60
0s - loss: 4.2886 - acc: 0.5714 - val_loss: 4.3871 - val_acc: 0.0000e+00
Epoch 50/60
0s - loss: 4.2831 - acc: 0.5714 - val_loss: 4.3859 - val_acc: 0.0000e+00
Epoch 51/60
0s - loss: 4.2791 - acc: 0.5714 - val_loss: 4.3848 - val_acc: 0.0000e+00
Epoch 52/60
0s - loss: 4.2774 - acc: 0.5714 - val_loss: 4.3836 - val_acc: 0.0000e+00
Epoch 53/60
0s - loss: 4.2714 - acc: 0.5714 - val_loss: 4.3824 - val_acc: 0.0000e+00
Epoch 54/60
0s - loss: 4.2696 - acc: 0.5714 - val_loss: 4.3813 - val_acc: 0.0000e+00
Epoch 55/60
0s - loss: 4.2641 - acc: 0.5714 - val_loss: 4.3801 - val_acc: 0.0000e+00
Epoch 56/60
0s - loss: 4.2621 - acc: 0.5714 - val_loss: 4.3790 - val_acc: 0.0000e+00
Epoch 57/60
0s - loss: 4.2569 - acc: 0.5714 - val_loss: 4.3778 - val_acc: 0.0000e+00
Epoch 58/60
0s - loss: 4.2556 - acc: 0.5714 - val_loss: 4.3767 - val_acc: 0.0000e+00
Epoch 59/60
0s - loss: 4.2492 - acc: 0.5714 - val_loss: 4.3755 - val_acc: 0.0000e+00
Epoch 60/60
0s - loss: 4.2446 - acc: 0.5714 - val_loss: 4.3744 - val_acc: 0.0000e+00
Out[23]:
<keras.callbacks.History at 0x7fbb9c4c7a58>

The source for my network is:

from keras.callbacks import History
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras import optimizers

history = History()

inputDim = xs.shape[1]  # number of input features (5)

model = Sequential()

model.add(Dense(100, activation='softmax', input_dim=inputDim))
model.add(Dropout(0.2))
model.add(Dense(200, activation='softmax'))
model.add(Dropout(0.2))
model.add(Dense(84, activation='softmax'))

sgd = optimizers.SGD(lr=0.0009, decay=1e-10, momentum=0.9, nesterov=False)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
model.fit(xs, ys, validation_split=0.3, verbose=2, callbacks=[history], epochs=60, batch_size=32)

Some simple statistics of my training data:

               0          1          2          3          4
count  10.000000  10.000000  10.000000  10.000000  10.000000
mean    0.275118  -0.033855   0.273101   0.277016   0.030270
std     0.011664   0.001594   0.011386   0.012060   0.000746
min     0.261630  -0.035711   0.259897   0.261270   0.029256
25%     0.263404  -0.035207   0.261871   0.267094   0.029756
50%     0.274853  -0.033919   0.273771   0.276760   0.030201
75%     0.284981  -0.032777   0.283758   0.288072   0.030692
max     0.290841  -0.030872   0.287884   0.293469   0.031718

Generated using:

import pandas as pd
pd.DataFrame(xs).describe()

The standard deviation is very low for this dataset. Is this a cause of my network not converging?

Are there other modifications I can try in order to improve the training and validation accuracies of this network?

Update:

The first and fourth training examples:

[0.28555165, -0.03237782,  0.28525293,  0.2898103 ,  0.03093571]
[0.27617554, -0.03335768,  0.27927279,  0.28285823,  0.03015975] 

map to the same target:

     0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
     0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
     0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
     0.,  0.,  0.,  0.,  0.,  1.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
     0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
     0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
     0.,  0.,  0.,  0.,  0.,  0.

     0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
     0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
     0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
     0.,  0.,  0.,  0.,  0.,  1.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
     0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
     0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
     0.,  0.,  0.,  0.,  0.,  0.

Is there a property of these training examples that could be skewing the results? I understand that a large amount of training data is required to train a neural network, but that does not explain why loss and val_loss decrease while the training and validation accuracies (acc and val_acc) remain static.

You have a lot of classes. I would advise leaving the network to train for more epochs. – Marcin Możejko
Just to be sure, are you really trying to solve an 84-class classification problem with 7 training samples of 5 features each? – desertnaut

1 Answer

9
votes

First, I must warn you of some things that are not quite ok here:

  1. It seems that you are attempting a classification problem with 84 classes using only 10 data samples (7 for training and 3 for validation). This is definitely far too little data to build a successful deep learning model (most deep learning problems require at least thousands of data samples, some even millions). For starters, you don't even have one sample for each of your categories, so with this little data it looks like a lost cause.

    It seems you are already aware of this, given what you indicate in your post. You also say that it does not explain the strange behavior of your accuracy, but I would be careful about drawing conclusions under these conditions. Having so few data samples can easily produce unexpected or erratic behavior during training, so it is no wonder your metrics are behaving strangely.

  2. I see you have used a softmax activation in all the layers of your model. In my experience this is not a good idea for a classification problem. The current "standard" for classification with deep learning models is to use ReLU activations for the inner layers and leave softmax only for the output layer (a sketch follows this list).

    This is because softmax returns a probability distribution over your N classes (they all sum to 1), so it helps obtain the most probable class among your choices. This also means that softmax "squashes" its input values into the [0, 1] range, which can hurt your training process when applied to every layer, since it does not produce the same activation values that other sigmoidal functions would. In simpler words, you are somewhat "normalizing" your values at each layer of the model, not letting the data "speak for itself".
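
As a rough illustration of this suggestion, here is a minimal sketch of the same network with ReLU in the hidden layers and softmax only at the output. It reuses inputDim, the layer sizes and the SGD settings from the code in the question purely so the comparison is direct, not because those are recommended values:

from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras import optimizers

# Same layer sizes as the original model; only the activations change.
model = Sequential()
model.add(Dense(100, activation='relu', input_dim=inputDim))  # ReLU in the hidden layers
model.add(Dropout(0.2))
model.add(Dense(200, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(84, activation='softmax'))                    # softmax only on the 84-class output

sgd = optimizers.SGD(lr=0.0009, momentum=0.9, nesterov=False)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])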


Now, if we look at your 4 metrics during training, we can see that your acc is not as static as you think: in the first epochs it stays at 0.0, then around epoch 7 it starts to increase, until epoch 17 where it reaches 0.5714 and seems to settle at an asymptotic limit.

We can also see that your loss metric shows very little improvement, starting at 4.4333 and ending at 4.2446 with several ups and downs in between. Given this evidence, it seems that your model has overfitted: that is, it memorized your 7 training samples but did not really learn the underlying representation of the problem. When given 3 samples it had never seen, it failed on all of them. This is no surprise, given how little and how unbalanced your data is, along with the other aspects mentioned before.


Are there other modifications I can try in order to improve the training and validation accuracies of this network?

Besides getting more data and possibly redesigning your network architecture, there is one other thing that could be affecting you, and that is the validation_split parameter. You have used it correctly, specifying the desired ratio of validation to training data. However, reading the Keras FAQ entry How is the validation split computed?, we can see that:

If you set the validation_split argument in model.fit to e.g. 0.1, then the validation data used will be the last 10% of the data. If you set it to 0.25, it will be the last 25% of the data, etc. Note that the data isn't shuffled before extracting the validation split, so the validation is literally just the last x% of samples in the input you passed.

This means that by specifying a validation split of 0.3 you are always using the last 3 data elements as validation. What you can do is either shuffle all your data before calling fit, or use the validation_data parameter instead, passing the (X_val, Y_val) tuple you wish to validate on (obtained, for example, with sklearn's train_test_split). I hope this helps you with your problem, good luck.
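
As a minimal sketch of that second option (assuming scikit-learn is available; X_train, X_val, Y_train, Y_val are just illustrative names):

from sklearn.model_selection import train_test_split

# Shuffle and split the data ourselves instead of relying on validation_split,
# which would always take the *last* 30% of the (unshuffled) samples.
X_train, X_val, Y_train, Y_val = train_test_split(xs, ys, test_size=0.3, shuffle=True)

model.fit(X_train, Y_train,
          validation_data=(X_val, Y_val),
          verbose=2, callbacks=[history],
          epochs=60, batch_size=32)

With a dataset this small the split will still be very noisy, but at least the validation samples are no longer guaranteed to be the same last 3 rows every run.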