
I am trying to display the accuracy and loss of my network as graphs in TensorBoard, but the training and validation data are shown as separate runs. I am still relatively inexperienced with TensorFlow and TensorBoard, so I hope you can spot the reason for this.

Here is my code:

import os
import time
import pickle
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.callbacks import TensorBoard

print("Loading data via pickle")
X = pickle.load(open("X.pickle", "rb"))
y = pickle.load(open("y.pickle", "rb"))

print(len(X))
print(len(y))

startTime = time.time()
hidden_dense_layers = [0,1,2]
hidden_dense_layer_size = [64, 128, 256, 512, 1024]

for dense_layer_amount in hidden_dense_layers:
    for dense_layer_size in hidden_dense_layer_size:
        NAME = "{}-hidden_layers-{}-layersize".format(dense_layer_amount, dense_layer_size)
        print("----------", NAME, "----------")

        print("Building Model")
        # model = keras.Sequential([
        #     keras.layers.Flatten(input_shape=(200, 200)),
        #     keras.layers.Dense(500, activation="relu"),
        #     keras.layers.Dense(1, activation="sigmoid")
        # ])

        model = keras.Sequential()

        model.add(keras.layers.Flatten(input_shape=(75, 75)))

        for i in range(dense_layer_amount):
            model.add(keras.layers.Dense(dense_layer_size, activation="relu"))

        model.add(keras.layers.Dense(1, activation="sigmoid"))

        model.compile(loss='binary_crossentropy',
                      optimizer='adam',
                      metrics=['accuracy'])

        print("Creating Callbacks")

        print("Creating Checkpoint Callback")
        checkpoint_path = "training_2/cp-{epoch:04d}.ckpt"
        checkpoint_dir = os.path.dirname(checkpoint_path)

        # Create a callback that saves the model's weights
        checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
            filepath=checkpoint_path,
            save_weights_only=True,
            verbose=1
        )

        print("Creating Tensorboard Callback")
        tensorboard_callback = TensorBoard(log_dir="logs/{}".format(NAME))

        print("Training Model")
        model.fit(
            X,
            y,
            # batch_size=32,
            epochs=10,
            callbacks=[
                # checkpoint_callback,
                tensorboard_callback
            ],
            validation_split=0.3
        )

Here is how the runs are displayed for me:

Here is how the graphs are displayed to me:

1 Answer
It is completely normal to have two curves on each graph. Each curve corresponds to the training data or to the validation data (orange and blue, respectively, on your plots). Each epoch is a two-step process:

  • First, the model's parameters are actually tuned with gradient descent: this is the training step. The blue curve tells you whether the model is learning anything (e.g.: is the model complex enough for the given task?).
  • Second, you need to make sure that the trained model performs well on data that was not used to tune the parameters: this is the validation step. The orange curve tells you how close you are to an overfitting situation (meaning the model scores well during tuning, but performs badly when fed new data).
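For what it's worth, you can see the two series behind those curves directly in the `History` object that `fit` returns; here is a minimal sketch with synthetic data standing in for your pickled arrays (smaller model and fewer epochs, just for illustration):

```python
import numpy as np
from tensorflow import keras

# Synthetic stand-in for the real 75x75 images and binary labels.
rng = np.random.default_rng(0)
X = rng.random((100, 75, 75)).astype("float32")
y = rng.integers(0, 2, size=100).astype("float32")

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(75, 75)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])

# With validation_split set, Keras holds out a slice of the data and
# evaluates it after every epoch, alongside the training metrics.
history = model.fit(X, y, epochs=2, validation_split=0.3, verbose=0)

# One value per epoch for each series: the training and validation
# curves in TensorBoard are drawn from exactly these two series.
print(sorted(history.history.keys()))
# e.g. ['accuracy', 'loss', 'val_accuracy', 'val_loss']
```

So a single `fit` call always produces both a training and a validation series per metric, which is why each of your graphs shows two curves.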