2
votes

I am trying to use the TensorFlow Lite TOCO converter in order to get a tflite graph. I use the toco command line to run the conversion. I don't get any errors during the conversion, but it seems that my conversion produces garbage (the output is all NaNs). I am asking for ideas on how to debug this conversion.

I do the following steps:

  1. load .ckpt files and convert to frozen_graph:

    with g.as_default(), g.device(device_t), \
            tf.Session(config=soft_config) as sess:
        batch_shape = (batch_size,) + img_shape
        img_placeholder = tf.placeholder(tf.float32, shape=batch_shape,
                                         name='img_placeholder')

        preds = transform(img_placeholder)

        saver = tf.train.Saver()
        saver.restore(sess, checkpoint_dir)
        frozen_graph_def = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph_def, ['transformer/up-sample/mul'])
        with open('frozeen_graph.pb', 'wb') as f:
            f.write(frozen_graph_def.SerializeToString())
    

Question: is the code above equivalent to using the script tensorflow/python/tools/freeze_graph.py ?
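For comparison, the freeze_graph.py script in TF 1.x is typically invoked roughly as below. This is a sketch, not taken from the question: the file paths are placeholders, and only the output node name comes from the code above.

```shell
python tensorflow/python/tools/freeze_graph.py \
  --input_graph=graph.pbtxt \
  --input_checkpoint=model.ckpt \
  --output_graph=frozen_graph.pb \
  --output_node_names=transformer/up-sample/mul
```

Both paths call `convert_variables_to_constants` under the hood, so doing it inline in a session (as above) should produce an equivalent frozen GraphDef for the same output nodes.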

When I use the code above, I also check the freezing by loading the frozen graph and passing an input image through it, and the output looks good. So it seems that the freezing works.
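To make that sanity check reproducible, the comparison between the frozen-graph output and the (later) TFLite output can be scripted. The helper below is a minimal NumPy-only sketch; `reference` and `candidate` are placeholder names for the two outputs:

```python
import numpy as np

def outputs_match(reference, candidate, rtol=1e-4, atol=1e-5):
    """Report whether two model outputs agree within tolerance."""
    reference = np.asarray(reference, dtype=np.float32)
    candidate = np.asarray(candidate, dtype=np.float32)
    if reference.shape != candidate.shape:
        return False, 'shape mismatch: %s vs %s' % (reference.shape, candidate.shape)
    if np.isnan(candidate).any():
        return False, 'candidate contains NaNs'
    max_diff = float(np.max(np.abs(reference - candidate)))
    ok = np.allclose(reference, candidate, rtol=rtol, atol=atol)
    return ok, 'max abs diff = %g' % max_diff
```

A check like this makes it easy to tell apart "the conversion is numerically off" (finite but large diff) from "the conversion is broken" (NaNs).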

  2. Use the toco conversion command line (tried with TensorFlow 1.10 and tensorflow nightly-gpu):

    toco \
      --graph_def_file=frozeen_graph.pb \
      --output_file=converted_graph.lite \
      --input_format=TENSORFLOW_GRAPHDEF \
      --output_format=TFLITE \
      --input_shape=1,256,256,3 \
      --input_array=img_placeholder:0 \
      --output_array=transformer/up-sample/mul:0 \
      --inference_type=FLOAT \
      --input_data_type=FLOAT

I don't get any errors when I execute the command above. Note that I changed my graph in order to get rid of some "unsupported operation" errors.
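One thing worth double-checking in the invocation: to my understanding, TOCO's `--input_array`/`--output_array` flags expect array (op) names without the `:0` tensor suffix, and passing `img_placeholder:0` can silently select the wrong array. A variant to try, with the same flags otherwise and the filename kept as in the question:

```shell
toco \
  --graph_def_file=frozeen_graph.pb \
  --output_file=converted_graph.lite \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --input_shape=1,256,256,3 \
  --input_array=img_placeholder \
  --output_array=transformer/up-sample/mul \
  --inference_type=FLOAT \
  --input_data_type=FLOAT
```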

  3. Next, I use the TensorFlow Lite interpreter to test the converted graph (using the Python API):

    import numpy as np
    import tensorflow as tf

    tflite_graph_filename = 'converted_graph.lite'
    # Load TFLite model and allocate tensors.
    interpreter = tf.contrib.lite.Interpreter(model_path=tflite_graph_filename)
    interpreter.allocate_tensors()
    
    # Get input and output tensors.
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()
    
    input_shape = input_details[0]['shape']
    
    X = np.zeros(input_shape, np.float32)
    X[0] = content_image
    
    input_data = X 
    interpreter.set_tensor(input_details[0]['index'], input_data)
    
    interpreter.invoke()
    output_data = interpreter.get_tensor(output_details[0]['index'])
    

    Unfortunately, the output_data is all NaNs.

Can somebody give me some suggestions for debugging or the right way of doing such a conversion?
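One debugging step that costs little: before blaming the conversion, verify that the input itself is finite, and if the output is bad, locate where the NaNs are. A minimal NumPy-only sketch (`content_image` and `output_data` stand in for the arrays from the code above):

```python
import numpy as np

def describe_nans(name, arr):
    """Print how many NaN/Inf entries an array has and where the first NaN is."""
    arr = np.asarray(arr)
    nan_mask = np.isnan(arr)
    inf_mask = np.isinf(arr)
    print('%s: %d NaNs, %d Infs out of %d values'
          % (name, nan_mask.sum(), inf_mask.sum(), arr.size))
    if nan_mask.any():
        # argmax over a boolean mask gives the flat index of the first True
        first = np.unravel_index(np.argmax(nan_mask), arr.shape)
        print('  first NaN at index %s' % (first,))
    return int(nan_mask.sum())
```

If the input already contains NaNs or Infs (e.g. a failed image load or a missed normalization step), the converter is not at fault; if the input is clean but every output element is NaN, the problem is usually inside the converted graph, for example an unsupported op that was replaced, or a mismatched input/output array name.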

Thanks ahead, Vadim


1 Answer

0
votes

An easy way of converting your .pb into .lite is answered here:

https://stackoverflow.com/a/58583419/11517841

And to make sure you do not make any architecture-related mistakes, here is an FYI/"beware":

https://stackoverflow.com/a/58583602/11517841