2 votes

I am really new to TensorFlow, so please bear with me even if this question is total nonsense...

I have code which

1) defines the network like this:

x = tf.placeholder(tf.float32, shape=[None, 784], name='input')
y_ = tf.placeholder(tf.float32, shape=[None, 10], name='reference')
...
fc_b2_hist = tf.summary.histogram('b_fc2', b_fc2)

2) then restores the model with:

with tf.Session() as sess:
  #NOTE
  #sess.run(tf.initialize_all_variables())
  model_path = tf.train.latest_checkpoint(model_path)
  saver = tf.train.import_meta_graph(model_path+'.meta')
  saver.restore(sess, model_path)

  all_vars = tf.trainable_variables()
  for v in all_vars:
    print(sess.run(v))

This restoring code works perfectly fine when run in a separate file. However, when it is run in the same file as the network definition, it aborts with the following error message:

Traceback (most recent call last):
  File "lenet_my.py", line 160, in <module>
    print(sess.run(v))
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 766, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 964, in _run
    feed_dict_string, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1014, in _do_run
    target_list, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1034, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value lenet_model/conv_pool_1/W_conv1
  [[Node: _send_lenet_model/conv_pool_1/W_conv1_0 = _Send[T=DT_FLOAT, client_terminated=true, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=422131278131772803, tensor_name="lenet_model/conv_pool_1/W_conv1:0", _device="/job:localhost/replica:0/task:0/cpu:0"]]

After seeing this message for the first time, I uncommented the line under #NOTE, which is

sess.run(tf.initialize_all_variables())

It no longer showed the error, but the pretrained variables were not restored; they were initialized to the values I specified when defining the network.
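
In case it helps diagnose this, here is a small check I can run after import_meta_graph() (just a sketch of my own, not part of the code above) to see whether the graph ended up holding a second, renamed copy of every variable next to the ones I defined by hand, which could explain why the restored values never show up in my print loop:

# List every variable op in the default graph; duplicated names such as a
# second copy of 'lenet_model/conv_pool_1/W_conv1' with a '_1' suffix would
# mean import_meta_graph() added another copy of the network.
graph = tf.get_default_graph()
for op in graph.get_operations():
  if op.type in ('Variable', 'VariableV2'):
    print(op.name)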

So I have two questions!

First, I don't get why running the code in a separate file works, while running it in one file produces such a HORRIFYING error message. Second, I don't get why initializing the variables and then restoring the model with the code written above does not restore the previously trained variables.

Thanks in advance.

Were you able to fix it? – WhatAMesh

2 Answers

0 votes

I think what might help is not running tf.train.import_meta_graph(). The import from the .meta file will create a new graph as specified in that file, which you do not need since you have already built your own graph.

Just say:

saver = tf.train.Saver()
with tf.Session() as sess:
  model_path = tf.train.latest_checkpoint(model_path)
  saver.restore(sess, model_path)
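
Putting that together with the print loop from your question, the whole thing might look roughly like this (a sketch only; I am assuming model_path initially points at the checkpoint directory, as in your code):

# Saver built against the graph you defined above, so restore() writes the
# checkpoint values straight into those variables; no initializer is needed.
saver = tf.train.Saver()
with tf.Session() as sess:
  checkpoint_path = tf.train.latest_checkpoint(model_path)  # checkpoint prefix inside the directory
  saver.restore(sess, checkpoint_path)

  # These should now print the trained values from the checkpoint.
  for v in tf.trainable_variables():
    print(v.name)
    print(sess.run(v))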
-1 votes

Maybe you should move

saver = tf.train.import_meta_graph(model_path+'.meta')

outside of your Session block.

Here is a slice of my code:

saver = tf.train.import_meta_graph('./models/xxx.ckpt-30000.meta')
with tf.Session() as sess:
    saver.restore(sess,'./models/xxx.ckpt-30000')

Hope it is useful.
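
If you rebuild the graph from the .meta file like this (and do not also define the network again in the same script), you can read the restored variables back by looking them up by name in the imported graph. A rough sketch, using the variable name from your error message and my placeholder checkpoint paths:

saver = tf.train.import_meta_graph('./models/xxx.ckpt-30000.meta')
with tf.Session() as sess:
    saver.restore(sess, './models/xxx.ckpt-30000')

    # Look tensors up by name in the imported graph, e.g. the weight that
    # appeared in your error message:
    graph = tf.get_default_graph()
    w_conv1 = graph.get_tensor_by_name('lenet_model/conv_pool_1/W_conv1:0')
    print(sess.run(w_conv1))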