6 votes

So I have trained an Inception model to recognize flowers according to this guide: https://www.tensorflow.org/versions/r0.8/how_tos/image_retraining/index.html

bazel build tensorflow/examples/image_retraining:retrain
bazel-bin/tensorflow/examples/image_retraining/retrain --image_dir ~/flower_photos

To classify an image via the command line, I can do this:

bazel build tensorflow/examples/label_image:label_image && \
bazel-bin/tensorflow/examples/label_image/label_image \
--graph=/tmp/output_graph.pb --labels=/tmp/output_labels.txt \
--output_layer=final_result \
--image=$HOME/flower_photos/daisy/21652746_cc379e0eea_m.jpg

But how do I serve this graph via TensorFlow Serving?

The guide on setting up TensorFlow Serving (https://tensorflow.github.io/serving/serving_basic) does not explain how to incorporate the graph (output_graph.pb). The server expects files in a different format:

$>ls /tmp/mnist_model/00000001
checkpoint export-00000-of-00001 export.meta

3 Answers

1 vote

You have to export the model. I have a PR that does the export during retraining; the gist of it is below:

import tensorflow as tf

def export_model(sess, architecture, saved_model_dir):
  # Pick the graph's input tensor, which differs by architecture.
  if architecture == 'inception_v3':
    input_tensor = 'DecodeJpeg/contents:0'
  elif architecture.startswith('mobilenet_'):
    input_tensor = 'input:0'
  else:
    raise ValueError('Unknown architecture', architecture)
  in_image = sess.graph.get_tensor_by_name(input_tensor)
  inputs = {'image': tf.saved_model.utils.build_tensor_info(in_image)}

  # 'final_result' is the output layer added by the retraining script.
  out_classes = sess.graph.get_tensor_by_name('final_result:0')
  outputs = {'prediction': tf.saved_model.utils.build_tensor_info(out_classes)}

  # Build a prediction signature mapping the input and output tensors.
  signature = tf.saved_model.signature_def_utils.build_signature_def(
    inputs=inputs,
    outputs=outputs,
    method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME
  )

  legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')

  # Save out the SavedModel.
  builder = tf.saved_model.builder.SavedModelBuilder(saved_model_dir)
  builder.add_meta_graph_and_variables(
    sess, [tf.saved_model.tag_constants.SERVING],
    signature_def_map={
      tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature
    },
    legacy_init_op=legacy_init_op)
  builder.save()
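
For example, once the retrained graph from output_graph.pb has been loaded into a session, you could call it along these lines (a minimal sketch assuming TensorFlow 1.x and the default paths from the retraining guide; since the frozen graph bakes the weights in as constants, the variables directory may end up empty):

with tf.Graph().as_default() as graph:
  with tf.gfile.GFile('/tmp/output_graph.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
  # name='' keeps the original tensor names (no 'import/' prefix).
  tf.import_graph_def(graph_def, name='')
  with tf.Session(graph=graph) as sess:
    export_model(sess, 'inception_v3', '/tmp/saved_models/1')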

Running export_model will create a variables directory and a saved_model.pb file. If you put them under a parent directory representing the version number (e.g. 1/), you can then call TensorFlow Serving via:

tensorflow_model_server --port=9000 --model_name=inception --model_base_path=/path/to/saved_models/
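
Once the server is up, you can query it with a gRPC client along the lines of the standard TensorFlow Serving inception client (a minimal sketch; the host, port, and image path are assumptions, and 'image' is the input key defined in the signature above):

from grpc.beta import implementations
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2

channel = implementations.insecure_channel('localhost', 9000)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'inception'
# Matches DEFAULT_SERVING_SIGNATURE_DEF_KEY used in the export above.
request.model_spec.signature_name = 'serving_default'
with open('/path/to/daisy.jpg', 'rb') as f:
  # DecodeJpeg/contents:0 takes the raw JPEG bytes as a scalar string.
  request.inputs['image'].CopyFrom(
      tf.contrib.util.make_tensor_proto(f.read(), shape=[]))

result = stub.Predict(request, 10.0)  # 10 second timeout
print(result)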
2 votes

To serve the graph after you have trained it, you would need to export it using this API: https://www.tensorflow.org/versions/r0.8/api_docs/python/train.html#export_meta_graph

That API generates the meta graph def that is needed by the serving code (this will generate the .meta file you are asking about).

Also, you need to save a checkpoint using Saver.save() from the Saver class: https://www.tensorflow.org/versions/r0.8/api_docs/python/train.html#Saver

Once you have done this, you will have both the meta graph def and the checkpoint files that are needed to restore the graph.
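
Putting those two calls together (a minimal sketch using r0.8-era APIs; the variable just stands in for your retrained weights, and the paths mirror the mnist_model layout from the question):

import tensorflow as tf

w = tf.Variable(tf.zeros([1]), name='w')  # stand-in for the real model
saver = tf.train.Saver()

with tf.Session() as sess:
  sess.run(tf.initialize_all_variables())
  # Writes the checkpoint files (plus the 'checkpoint' index file).
  saver.save(sess, '/tmp/mnist_model/00000001/export')
  # Writes the meta graph def (the export.meta file).
  tf.train.export_meta_graph('/tmp/mnist_model/00000001/export.meta')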

0 votes

Check out this gist for how to load your .pb output graph in a Session:

https://github.com/eldor4do/Tensorflow-Examples/blob/master/retraining-example.py
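
The core of it looks roughly like this (a minimal sketch; the image path is an assumption, and the tensor names are the retrain script's defaults):

import tensorflow as tf

# Load the retrained graph from the frozen GraphDef.
with tf.gfile.GFile('/tmp/output_graph.pb', 'rb') as f:
  graph_def = tf.GraphDef()
  graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name='')

with tf.Session() as sess:
  softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
  image_data = tf.gfile.GFile('/path/to/image.jpg', 'rb').read()
  predictions = sess.run(softmax_tensor,
                         {'DecodeJpeg/contents:0': image_data})
  print(predictions)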