1
votes

I trained an FCN model in TensorFlow following the implementation in the link and saved the complete model as a checkpoint. Now I want to use the saved (pre-trained) model for a different problem. I tried to restore the model from the checkpoint by specifying the weights in the Saver as:

saver = tf.train.Saver({"weights" : [w1_1,w1_2,w2_1,w2_2,w3_1,w3_2,w3_3,w3_4, w4_1, w4_2, w4_3, w4_4,w5_1,w5_2,w5_3,w6,w7]})

I am getting weights as:

w1_1=tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,scope='inference/conv1_1_w')

and so on. I am not able to restore it successfully (up to a specific layer). TensorFlow version: 0.12r

Can you please share the error that you're seeing? As far as I can tell, the type of your first argument to tf.train.Saver is not quite right: instead of a dictionary mapping one key to a list of variables, it will expect a dictionary mapping keys to individual variables. - mrry
@mrry you are exactly right, but I am getting the individual weights from the list. Can you explain your point in more detail, please? - Junaid Younad
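
Following mrry's comment, here is a minimal sketch of what the Saver constructor expects: a dict mapping each checkpoint name to an individual variable (or simply a list of variables), not one key mapped to a list. The checkpoint names come from the question; the checkpoint path and the exact set of layers are assumptions.

    import tensorflow as tf

    # tf.get_collection returns a *list*, so take the single matching variable out of it.
    w1_1 = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
                             scope='inference/conv1_1_w')[0]
    w1_2 = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
                             scope='inference/conv1_2_w')[0]
    # ... and so on for the other layers you want to restore.

    # Map the name stored in the checkpoint to one variable each.
    saver = tf.train.Saver({
        'inference/conv1_1_w': w1_1,
        'inference/conv1_2_w': w1_2,
        # ...
    })

    with tf.Session() as sess:
        saver.restore(sess, 'path/to/checkpoint')  # hypothetical checkpoint path

If the variable names in the new graph already match the names in the checkpoint, passing a plain list of those variables to tf.train.Saver works as well.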

1 Answer

2
votes

Either you can call init = tf.initialize_variables([list_of_vars]) followed by sess.run(init), which will reinitialize those variables for you, or you can recreate the graph with the same structure from the point where you want to freeze the weights, but give the variables different names. Further, if you only want to train certain variables, you can pass just those variables to the optimizer: tf.train.AdamOptimizer(learning_rate).minimize(loss, var_list=[wi, wj, ...])
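
To illustrate, a minimal sketch of the combination described above: restore the pre-trained layers, initialize the new ones, and pass only the new variables to the optimizer. The scope names, shapes, loss, and checkpoint path below are placeholders, not taken from the original post.

    import tensorflow as tf

    with tf.variable_scope('inference'):
        w1_1 = tf.get_variable('conv1_1_w', shape=[3, 3, 3, 64])   # pre-trained layer
    with tf.variable_scope('new_head'):
        w_new = tf.get_variable('fc_w', shape=[64, 10])             # new layer for the new task

    loss = tf.reduce_sum(tf.square(w_new))                          # stand-in loss

    pretrained_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='inference')
    new_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='new_head')

    saver = tf.train.Saver(pretrained_vars)         # restores only the pre-trained variables
    init_new = tf.initialize_variables(new_vars)    # 0.12-era API (later tf.variables_initializer)

    # Only new_vars receive gradient updates; the restored variables stay frozen.
    train_op = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss, var_list=new_vars)

    with tf.Session() as sess:
        sess.run(init_new)                          # initialize the new layers
        saver.restore(sess, 'path/to/checkpoint')   # hypothetical checkpoint path
        sess.run(train_op)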