2 votes

I have trained my cognitive vision API model and have exported it (I tried two formats: TensorFlow and SavedModel).

Now I would love to load this exported model in a Python script, ideally using Keras rather than native TensorFlow. I would like to print the model's summary() and copy its layers so I can retrain it in a custom Python script.

However, I can't seem to get this to work:

Loading this using the SavedModel format

With the following code:

import tensorflow as tf
loaded = tf.saved_model.load(export_dir='mydir/savedmodel')
loaded.summary()

I get the following exception: 'AutoTrackable' object has no attribute 'summary'. It seems that the load method returned an AutoTrackable object rather than a Model.
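For what it's worth, my understanding is that tf.saved_model.load is the low-level loader and always returns a trackable object like this. If the SavedModel had been written by Keras itself, the Keras loader should give back a real Model; a minimal sketch of what I mean (assuming the export contains Keras metadata, which may not be the case here):

import tensorflow as tf

# Sketch: only works if the SavedModel was written by Keras,
# e.g. via model.save() or tf.keras.models.save_model().
model = tf.keras.models.load_model('mydir/savedmodel')
model.summary()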

Using GraphDef

The following code, taken from this link, creates a TensorFlow-specific type that I don't really know how to transform into a Keras model.

import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
labels = []

# These are set to the default names from exported models; update as needed.
filename = 'mydir/tf/model.pb'
labels_filename = 'mydir/tf/labels.txt'

# Import the TF graph from the frozen .pb file
with tf.io.gfile.GFile(filename, 'rb') as f:
    graph_def.ParseFromString(f.read())
    tf.compat.v1.import_graph_def(graph_def, name='')

# Create the list of class labels, one per line
with open(labels_filename, 'rt') as lf:
    for line in lf:
        labels.append(line.strip())
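The furthest I can see getting with this is wrapping the imported graph into a callable concrete function, which still isn't a Keras model. A sketch using tf.compat.v1.wrap_function; the tensor names 'Placeholder:0' and 'model_outputs:0' are assumptions I would have to verify against the exported graph:

# Sketch: wrap the frozen graph into a callable; the input/output
# tensor names are assumptions and depend on the exported model.
def wrap_frozen_graph(graph_def, inputs, outputs):
    def _imports_graph_def():
        tf.compat.v1.import_graph_def(graph_def, name='')
    wrapped = tf.compat.v1.wrap_function(_imports_graph_def, [])
    return wrapped.prune(
        tf.nest.map_structure(wrapped.graph.as_graph_element, inputs),
        tf.nest.map_structure(wrapped.graph.as_graph_element, outputs))

model_fn = wrap_frozen_graph(graph_def,
                             inputs='Placeholder:0',
                             outputs='model_outputs:0')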
Can you share the script that you used to export the TensorFlow checkpoint as a SavedModel? The export script performs a number of actions to prepare the model for inference: it creates and verifies a serving signature, converts variables to constants (also known as graph freezing), and outputs a SavedModel. – Ram-msft
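(For reference, a minimal sketch of what such a freezing step can look like in TF 2.x, using a stand-in Keras model rather than the actual Custom Vision export script:)

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

model = tf.keras.applications.MobileNetV2(weights=None)  # stand-in model
concrete_fn = tf.function(model).get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))

# Freeze the graph: convert the model's variables to constants
frozen_fn = convert_variables_to_constants_v2(concrete_fn)
tf.io.write_graph(frozen_fn.graph.as_graph_def(), 'mydir/tf', 'model.pb', as_text=False)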

1 Answer

0 votes

Use the TensorFlow SavedModel format to load the model at run time: https://www.tensorflow.org/guide/saved_model#building_a_savedmodel

Exporting the model can be done with a Python script that loads the model, creates a signature, and then saves the model in the SavedModel format.
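A minimal sketch of such an export script, using a stand-in Keras model; the input shape [None, 224, 224, 3] and the signature name are assumptions you should adjust to your model:

import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights=None)  # stand-in for your trained model

@tf.function(input_signature=[tf.TensorSpec([None, 224, 224, 3], tf.float32, name='image')])
def serving_fn(image):
    # Wrap the model call so the SavedModel gets an explicit serving signature
    return {'scores': model(image)}

tf.saved_model.save(model, 'mydir/savedmodel',
                    signatures={'serving_default': serving_fn})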

To persist the model so that it can be uploaded and read for registration: if you are using ./outputs to send files to the run's output, then inside your train.py script you just need to do something like this:

# Persist the model to the run's local ./outputs folder
tf.saved_model.save(model, './outputs/model/')

# Register the model with the Azure ML run object
# (e.g. run = Run.get_context() from azureml.core)
run.register_model(model_name, './outputs/model/')

I also found the link below, which shows that you can export an estimator as a tf.saved_model:

https://guillaumegenthial.github.io/serving-tensorflow-estimator.html
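A minimal sketch of that approach, with a toy model_fn and a feature named 'x' of shape [4] as assumptions; the estimator is trained for one step only so that a checkpoint exists before export:

import tensorflow as tf

def model_fn(features, labels, mode):
    logits = tf.compat.v1.layers.dense(features['x'], units=2)
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions={'logits': logits})
    loss = tf.compat.v1.losses.sparse_softmax_cross_entropy(labels, logits)
    train_op = tf.compat.v1.train.GradientDescentOptimizer(0.01).minimize(
        loss, global_step=tf.compat.v1.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

def input_fn():
    features = {'x': tf.random.normal([8, 4])}
    labels = tf.zeros([8], dtype=tf.int32)
    return tf.data.Dataset.from_tensors((features, labels))

estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir='mydir/estimator')
estimator.train(input_fn, steps=1)  # ensure a checkpoint exists before exporting

# The serving input expects serialized tf.Example protos with an 'x' feature
serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
    {'x': tf.io.FixedLenFeature([4], tf.float32)})
estimator.export_saved_model('mydir/savedmodel', serving_input_fn)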