
I have a text classification model that I train with batch_size=32.

The model is overfitting. I tried adding a Dropout layer, but it did not help. Here is my model:

Model: "sequential"
_________________________________________________________________
 Layer (type)                     Output Shape          Param #
=================================================================
 embedding (Embedding)            (None, 525, 300)      6845100
 conv1d (Conv1D)                  (None, 525, 64)       134464
 max_pooling1d (MaxPooling1D)     (None, 262, 64)       0
 dropout (Dropout)                (None, 262, 64)       0
 conv1d_1 (Conv1D)                (None, 262, 64)       28736
 global_max_pooling1d             (None, 64)            0
  (GlobalMaxPooling1D)
 dropout_1 (Dropout)              (None, 64)            0
 dense (Dense)                    (None, 256)           16640
 dense_1 (Dense)                  (None, 5)             1285
=================================================================
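Working backwards from the Param # column recovers the hyper-parameters the summary hides (vocabulary size and kernel widths). This is a sketch of that arithmetic, assuming the Keras defaults of one bias per filter/unit; the helper name is mine, not from the model code:

```python
# Embedding: vocab_size * embedding_dim parameters (no bias term).
embedding_params = 6_845_100
embedding_dim = 300
vocab_size = embedding_params // embedding_dim
assert vocab_size * embedding_dim == embedding_params  # divides exactly
print(vocab_size)  # 22817

# Conv1D: kernel_size * in_channels * filters + filters (bias).
def conv1d_kernel_size(params, in_channels, filters):
    return (params - filters) // (in_channels * filters)

assert conv1d_kernel_size(134_464, 300, 64) == 7  # conv1d
assert conv1d_kernel_size(28_736, 64, 64) == 7    # conv1d_1

# Dense: inputs * units + units (bias).
assert 64 * 256 + 256 == 16_640  # dense
assert 256 * 5 + 5 == 1_285      # dense_1 (5-way output)
```

So the network is roughly a 22,817-word embedding into two width-7 conv blocks (the unchanged sequence length 525 → 525 also implies `padding='same'` on the convolutions), and almost all of the ~7M parameters sit in the embedding layer.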

Here is the training output:

Epoch 1/1000
354/354 [==============================] - 7s 20ms/step - loss: 0.2824 - accuracy: 0.8955 - val_loss: 0.5116 - val_accuracy: 0.8111
Epoch 2/1000
354/354 [==============================] - 5s 14ms/step - loss: 0.2689 - accuracy: 0.8978 - val_loss: 0.5353 - val_accuracy: 0.8323
Epoch 3/1000
354/354 [==============================] - 5s 14ms/step - loss: 0.2550 - accuracy: 0.8945 - val_loss: 0.5608 - val_accuracy: 0.7740
Epoch 4/1000
354/354 [==============================] - 5s 14ms/step - loss: 0.2393 - accuracy: 0.8928 - val_loss: 0.5669 - val_accuracy: 0.8160
Epoch 5/1000
354/354 [==============================] - 5s 14ms/step - loss: 0.2220 - accuracy: 0.8893 - val_loss: 0.5919 - val_accuracy: 0.7941
Epoch 6/1000
354/354 [==============================] - 5s 14ms/step - loss: 0.2100 - accuracy: 0.8929 - val_loss: 0.6458 - val_accuracy: 0.7867
Epoch 7/1000
354/354 [==============================] - 5s 14ms/step - loss: 0.1979 - accuracy: 0.8890 - val_loss: 0.6809 - val_accuracy: 0.7956
Epoch 8/1000
354/354 [==============================] - 5s 15ms/step - loss: 0.1874 - accuracy: 0.8830 - val_loss: 0.6853 - val_accuracy: 0.8008
Epoch 9/1000
354/354 [==============================] - 5s 15ms/step - loss: 0.1764 - accuracy: 0.8816 - val_loss: 0.7130 - val_accuracy: 0.8001
Epoch 10/1000
354/354 [==============================] - 5s 14ms/step - loss: 0.1683 - accuracy: 0.8845 - val_loss: 0.7139 - val_accuracy: 0.8001
Epoch 11/1000
354/354 [==============================] - 5s 15ms/step - loss: 0.1589 - accuracy: 0.8842 - val_loss: 0.7663 - val_accuracy: 0.8266
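Pulling the numbers out of the log above makes the pattern explicit. A minimal check over those eleven (loss, val_loss) pairs, copied verbatim from the output:

```python
# (loss, val_loss) for epochs 1..11, copied from the training log.
history = [
    (0.2824, 0.5116), (0.2689, 0.5353), (0.2550, 0.5608),
    (0.2393, 0.5669), (0.2220, 0.5919), (0.2100, 0.6458),
    (0.1979, 0.6809), (0.1874, 0.6853), (0.1764, 0.7130),
    (0.1683, 0.7139), (0.1589, 0.7663),
]
train_loss = [t for t, _ in history]
val_loss = [v for _, v in history]

# Training loss falls at every epoch while validation loss rises at
# every epoch: the textbook overfitting signature.
assert all(a > b for a, b in zip(train_loss, train_loss[1:]))
assert all(a < b for a, b in zip(val_loss, val_loss[1:]))

# The train/validation gap more than doubles over the 11 epochs.
print(f"epoch 1 gap:  {val_loss[0] - train_loss[0]:.4f}")   # 0.2292
print(f"epoch 11 gap: {val_loss[-1] - train_loss[-1]:.4f}")  # 0.6074
```

In other words the divergence starts from the very first epoch, which points at the model memorizing the training set rather than a too-late stopping point.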

And here are the plots of accuracy and loss:

[plots: Acc, Loss]