2
votes

I am having a lot of trouble saving / restoring TensorFlow models: either my kernel dies ("Kernel seems to have died") or I get errors ("Variable ... already exists").

When my Kernel dies, I get this error log in my console:

[I 21:13:41.505 NotebookApp] Saving file at /Nanodegree_MachineLearning/06_Capstone/capstone.ipynb
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
[I 21:17:05.416 NotebookApp] KernelRestarter: restarting kernel (1/5)
WARNING:root:kernel 81679b46-ec9b-4ce6-b5be-ae2d9cf01210 restarted
[I 21:17:41.778 NotebookApp] Saving file at /Nanodegree_MachineLearning/06_Capstone/capstone.ipynb
[19324:20881:1229/212110:ERROR:object_proxy.cc(583)] Failed to call method: org.freedesktop.UPower.GetDisplayDevice: object_path= /org/freedesktop/UPower: org.freedesktop.DBus.Error.UnknownMethod: Method "GetDisplayDevice" with signature "" on interface "org.freedesktop.UPower" doesn't exist

My code is as follows:

if __name__ == '__main__':
    if LEARN_MODUS:
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            steps_per_epoch = len(X_train) // BATCH_SIZE
            num_examples = steps_per_epoch * BATCH_SIZE

            # Train model
            for i in range(EPOCHS):
                for step in range(steps_per_epoch):
                    # Calculate the next batch
                    batch_start = step * BATCH_SIZE
                    batch_end = (step + 1) * BATCH_SIZE
                    batch_x = X_train[batch_start:batch_end]
                    batch_y = y_train[batch_start:batch_end]

                    # Run one training step
                    loss = sess.run(train_op, feed_dict={x: batch_x, y: batch_y, keep_prob: 0.5})

            try:
                saver
            except NameError:
                saver = tf.train.Saver()
            saver.save(sess, 'foo')
            print("Model saved")

To restore the model, I use:

predictions = tf.argmax(fc2, 1)
predicted_classes = []

try:
    saver
except NameError:
    saver = tf.train.Saver()

with tf.Session() as sess:
    saver = tf.train.import_meta_graph('foo.meta')
    saver.restore(sess, tf.train.latest_checkpoint('./'))

    predicted_classes = sess.run(predictions, feed_dict={x: X_test, keep_prob: 1.0})

I have tried a lot of different approaches: sometimes it works (but not always!?), sometimes it crashes, and sometimes I get the variable error. Do I have to do the saving/restoring in another way?

I am using Ubuntu 14.04, Anaconda 3, Python 3.5.2, and TensorFlow 0.12, inside a Jupyter notebook.

Thank you!


1 Answer

3
votes

This can happen when you run out of memory; the solution is to try smaller batch sizes. I see that you are feeding your entire test set into a single run call, which needs enough memory to process all examples at once. You can do something like eval_in_batches to aggregate predictions over several smaller run calls.
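A minimal sketch of that batched-evaluation idea, assuming the `x` and `keep_prob` placeholders and the `predictions` op from the question (the function name and `batch_size` default are illustrative, not from any library):

```python
import numpy as np

def eval_in_batches(sess, predictions_op, data, x, keep_prob, batch_size=128):
    # Run predictions_op over `data` in small chunks so only one batch
    # has to fit in memory at a time, then stitch the results together.
    results = []
    for start in range(0, len(data), batch_size):
        batch = data[start:start + batch_size]
        results.append(sess.run(predictions_op,
                                feed_dict={x: batch, keep_prob: 1.0}))
    return np.concatenate(results)
```

Inside the restored session you would then replace the single large run call with something like `predicted_classes = eval_in_batches(sess, predictions, X_test, x, keep_prob)`.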