
I am working on a neural network with one hidden layer of 2000 neurons and 8 input neurons plus a constant, for a regression problem.

In particular, I am using RMSprop as the optimizer with a learning rate of 0.001, ReLU activation from the input to the hidden layer, and a linear activation from the hidden layer to the output. I am also using mini-batch gradient descent (batch size of 32) and training the model for 2000 epochs.

My goal is, after training, to extract the weights of the best neural network out of the 2000 epochs (where, after many trials, the best one is never the last; by "best" I mean the one that leads to the smallest MSE).

Using save_weights('my_model_2.h5', save_format='h5') actually works, but as I understand it, it extracts the weights from the last epoch, while I want those from the epoch in which the network performed best. Here is the code I have written:

def build_first_NN():
  model = keras.Sequential([
    layers.Dense(2000, activation=tf.nn.relu, input_shape=[len(X_34.keys())]),
    layers.Dense(1)
  ])

  optimizer = tf.keras.optimizers.RMSprop(0.001)

  model.compile(loss='mean_squared_error',
                optimizer=optimizer,
                metrics=['mean_absolute_error', 'mean_squared_error']
                )
  return model



first_NN = build_first_NN()

history_firstNN_all_nocv = first_NN.fit(X_34, 
                                        y_34, 
                                        epochs = 2000)

first_NN.save_weights('my_model_2.h5', save_format='h5')

trained_weights_path = 'C:/Users/Myname/Desktop/otherfolder/Data/my_model_2.h5'

trained_weights = h5py.File(trained_weights_path, 'r')

weights_0 = pd.DataFrame(trained_weights['dense/dense/kernel:0'][:])
weights_1 = pd.DataFrame(trained_weights['dense_1/dense_1/kernel:0'][:]) 

The weights extracted this way should be those from the last of the 2000 epochs: how can I instead get those from the epoch in which the MSE was smallest?

Looking forward to any comments.

EDIT: SOLVED

Building on the received suggestions, and for general interest, here is how I updated my code to achieve my goal:

# build_first_NN() as defined before

from keras.callbacks import ModelCheckpoint

first_NN = build_first_NN()

trained_weights_path = 'C:/Users/Myname/Desktop/otherfolder/Data/my_model_2.h5'

checkpoint = ModelCheckpoint(trained_weights_path, 
                             monitor='mean_squared_error', 
                             verbose=1, 
                             save_best_only=True, 
                             mode='min')

history_firstNN_all_nocv = first_NN.fit(X_34, 
                                        y_34, 
                                        epochs = 2000,
                                        callbacks = [checkpoint])

trained_weights = h5py.File(trained_weights_path, 'r')

weights_0 = pd.DataFrame(trained_weights['model_weights/dense/dense/kernel:0'][:])
weights_1 = pd.DataFrame(trained_weights['model_weights/dense_1/dense_1/kernel:0'][:]) 
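As an aside, instead of parsing the HDF5 file by hand with h5py (whose internal key layout varies across Keras versions), the weights can be read back through Keras itself via load_weights() and get_weights(). A minimal self-contained sketch with a toy two-layer model standing in for build_first_NN():

```python
from tensorflow import keras

# Toy stand-in for build_first_NN(): same two-layer structure, smaller.
model = keras.Sequential([
    keras.layers.Dense(16, activation='relu'),
    keras.layers.Dense(1),
])
model.build(input_shape=(None, 8))

# get_weights() returns [kernel_0, bias_0, kernel_1, bias_1] as numpy
# arrays, so no knowledge of the HDF5 group names is needed.
kernel_0, bias_0, kernel_1, bias_1 = model.get_weights()
print(kernel_0.shape)  # (8, 16)
print(kernel_1.shape)  # (16, 1)
```

With the real model, first_NN.load_weights(trained_weights_path) followed by first_NN.get_weights() yields the same arrays without hard-coding keys like 'model_weights/dense/dense/kernel:0'.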

Comment: When using the fit command, you can specify the validation_split parameter, which would run a validation check over that fraction of the data. Using ModelCheckpoint can be helpful to extract the maximum out of the 2000 epochs. – Koralp Catalsakal
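The validation_split suggestion from this comment can be sketched as follows; the toy data, the small model, and the path 'best.weights.h5' are all illustrative, not taken from the question:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras.callbacks import ModelCheckpoint

# Toy data standing in for X_34 / y_34: 100 samples, 8 features.
X = np.random.rand(100, 8)
y = np.random.rand(100, 1)

model = keras.Sequential([
    keras.layers.Dense(16, activation='relu'),
    keras.layers.Dense(1),
])
model.compile(loss='mean_squared_error', optimizer='rmsprop',
              metrics=['mean_squared_error'])

# Checkpoint on *validation* MSE so the saved weights are the ones
# that generalize best, not the ones that best fit the training set.
checkpoint = ModelCheckpoint('best.weights.h5',
                             monitor='val_mean_squared_error',
                             save_best_only=True,
                             save_weights_only=True,
                             mode='min',
                             verbose=0)

history = model.fit(X, y,
                    epochs=5,
                    validation_split=0.2,  # hold out 20% for validation
                    callbacks=[checkpoint],
                    verbose=0)
```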

1 Answer


Use the ModelCheckpoint callback from Keras.

from keras.callbacks import ModelCheckpoint

checkpoint = ModelCheckpoint(filepath, monitor='val_mean_squared_error', verbose=1, save_best_only=True, mode='min')

Pass this as a callback to your model.fit(). It will always save the model with the lowest validation MSE at the location specified by filepath.

You can find the documentation here. Of course, you need validation data during training for this. Otherwise, you can check for the lowest training MSE by writing a callback yourself.
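A minimal sketch of such a custom callback, keeping the best weights in memory rather than on disk. The class name BestTrainingMSE and the toy data are illustrative, not part of the Keras API:

```python
import numpy as np
from tensorflow import keras

# Hypothetical callback: remember the weights from the epoch with the
# lowest *training* MSE, so no validation data is needed.
class BestTrainingMSE(keras.callbacks.Callback):
    def __init__(self):
        super().__init__()
        self.best_mse = np.inf
        self.best_weights = None

    def on_epoch_end(self, epoch, logs=None):
        # Fall back to 'loss' (which is MSE here) if the metric key differs.
        mse = logs.get('mean_squared_error', logs.get('loss'))
        if mse is not None and mse < self.best_mse:
            self.best_mse = mse
            self.best_weights = self.model.get_weights()

# Usage sketch on toy data:
X = np.random.rand(64, 8)
y = np.random.rand(64, 1)
model = keras.Sequential([
    keras.layers.Dense(16, activation='relu'),
    keras.layers.Dense(1),
])
model.compile(loss='mean_squared_error', optimizer='rmsprop',
              metrics=['mean_squared_error'])

best_cb = BestTrainingMSE()
model.fit(X, y, epochs=5, verbose=0, callbacks=[best_cb])
model.set_weights(best_cb.best_weights)  # restore the best epoch's weights
```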