I am experimenting with audio classification.
My code:
The array shapes are:

x_train.shape = (800, 32, 1)
x_test.shape  = (200, 32, 1)
y_train.shape = (800, 1)
y_test.shape  = (200, 1)
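For anyone who wants to reproduce this, the arrays can be stubbed with random data of the same shapes (placeholder data only, not my real audio features; the label range 0-9 is assumed from the final 10-unit softmax layer):

```python
import numpy as np

# Placeholder arrays matching the shapes above (not the real features)
rng = np.random.default_rng(0)
x_train = rng.standard_normal((800, 32, 1)).astype("float32")
x_test = rng.standard_normal((200, 32, 1)).astype("float32")
# Integer labels 0..9, matching sparse_categorical_crossentropy + Dense(10)
y_train = rng.integers(0, 10, size=(800, 1))
y_test = rng.integers(0, 10, size=(200, 1))
```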
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Flatten, Dense, Dropout
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint

model = Sequential()
model.add(Conv1D(filters=64, kernel_size=20, padding='same', input_shape=(32, 1), activation='relu'))
model.add(MaxPooling1D(3))
model.add(Conv1D(filters=64, kernel_size=15, padding='same', activation='relu'))
model.add(MaxPooling1D(2))
model.add(Conv1D(filters=96, kernel_size=10, padding='same', activation='relu'))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(96, activation='relu'))
model.add(Dense(10, activation='softmax'))

model.compile(
    loss='sparse_categorical_crossentropy',
    optimizer=Adam(learning_rate=0.01),
    metrics=['accuracy']
)
model.summary()

red_lr = ReduceLROnPlateau(monitor='val_loss', patience=3, verbose=1, factor=0.001, mode='min')
check = ModelCheckpoint(filepath=r'/content/drive/My Drive/Colab Notebooks/genre/cnn.hdf5', verbose=1, save_best_only=True)

history = model.fit(x_train, y_train, epochs=30, batch_size=128,
                    validation_data=(x_test, y_test), verbose=2,
                    callbacks=[check, red_lr], shuffle=True)
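For reference, the shape arithmetic through the network (assuming 'same' padding on the convolutions and Keras' default pooling stride equal to the pool size) gives a flattened feature vector of 480 per example:

```python
# Trace the time dimension through the model.
# Conv1D with padding='same' preserves the length; MaxPooling1D uses
# 'valid' padding with stride == pool_size by default.
def pooled(length, pool_size):
    return (length - pool_size) // pool_size + 1

length = 32                  # input timesteps
length = pooled(length, 3)   # after MaxPooling1D(3) -> 10
length = pooled(length, 2)   # after MaxPooling1D(2) -> 5
flat = length * 96           # 96 filters in the last Conv1D
print(length, flat)          # 5 480
```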
I started with one layer and increased the depth to improve accuracy. The best model I got had these values: (loss: 0.5385 - acc: 0.8275 - val_loss: 0.8758 - val_acc: 0.7400).
I ran it 4 to 5 times, and every run shows the same pattern in val_acc and val_loss: both increase gradually, and after about half of the epochs they become stable for the rest of the run. Like this,
Any suggestions to increase accuracy? And why does the loss stop changing after half of the epochs?