3
votes

I am running the tf.contrib.learn wide and deep model and want to serve it with TensorFlow Serving. To export the trained model I am using the following piece of code:

    with tf.Session() as sess:
        init_op = tf.initialize_all_variables()
        saver = tf.train.Saver()
        m.fit(input_fn=lambda: input_fn(df_train), steps=FLAGS.train_steps)
        print('model successfully fit!!')
        results = m.evaluate(input_fn=lambda: input_fn(df_test), steps=1)
        for key in sorted(results):
            print("%s: %s" % (key, results[key]))
        model_exporter = exporter.Exporter(saver)
        model_exporter.init(
            sess.graph.as_graph_def(),
            init_op=init_op,
            named_graph_signatures={
                'inputs': exporter.generic_signature({'input': df_train}),
                'outputs': exporter.generic_signature({'output': df_train[impressionflag]})})
        model_exporter.export(export_path, tf.constant(FLAGS.export_version), sess)
        print('Done exporting!')

But the call saver = tf.train.Saver() fails with the error ValueError: No variables to save.

How can I save the model so that a servable is created, as required for loading the exported model into the TensorFlow model server? Any help is appreciated.

3
Did you try sess.run(init_op) first? Does your graph have anything else? – drpng
Yes, I tried using sess.run(init_op), however I am still facing the same problem. – Vasanti
Can you print more information here and compare against what you expect? You may want to try instantiating the saver later. – drpng

3 Answers

1
votes

The graph and session are contained inside the Estimator and are not exposed or leaked, so you cannot attach your own tf.train.Saver to them. Instead, use Estimator.export() to export the model and create a servable that can be run on model_servers.
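As a rough sketch (assuming m is the fitted wide-and-deep estimator from the question and export_path is a writable directory; additional keyword arguments such as signature_fn depend on your TensorFlow version):

    # Let the Estimator manage the graph, session, and saver internally.
    m.fit(input_fn=lambda: input_fn(df_train), steps=FLAGS.train_steps)
    # Writes a versioned export under export_path that model_server can load.
    m.export(export_path)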

1
votes

Estimator.export() is now deprecated, so you need to use Estimator.export_savedmodel() instead.

Here I wrote a simple tutorial Exporting and Serving a TensorFlow Wide & Deep Model.

TL;DR

To export an estimator there are four steps (a short sketch follows the list):

  1. Define features for export as a list of all features used during estimator initialization.

  2. Create a feature config using create_feature_spec_for_parsing.

  3. Build a serving_input_fn suitable for use in serving using input_fn_utils.build_parsing_serving_input_fn.

  4. Export the model using export_savedmodel().
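A minimal sketch of those four steps, assuming m is a fitted tf.contrib.learn estimator, feature_columns is the list of feature columns it was created with, and servable_model_dir is a placeholder path (module paths below are the TF 1.x contrib APIs named above):

    from tensorflow.contrib.layers import create_feature_spec_for_parsing
    from tensorflow.contrib.learn.python.learn.utils import input_fn_utils

    # 1. All feature columns used when the estimator was initialized.
    feature_columns = wide_columns + deep_columns  # placeholder names

    # 2. Feature config describing how to parse serialized tf.Example protos.
    feature_spec = create_feature_spec_for_parsing(feature_columns)

    # 3. Serving input function that accepts serialized tf.Example protos.
    serving_input_fn = input_fn_utils.build_parsing_serving_input_fn(feature_spec)

    # 4. Write a SavedModel that tensorflow_model_server can load.
    servable_model_path = m.export_savedmodel(servable_model_dir, serving_input_fn)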

To run a client script properly you need to do the following four steps (a minimal client sketch follows the list):

  1. Create and place your script somewhere in the /serving/ folder, e.g. /serving/tensorflow_serving/example/

  2. Create or modify corresponding BUILD file by adding a py_binary.

  3. Build and run a model server, e.g. tensorflow_model_server.

  4. Create, build and run a client that sends a tf.Example to our tensorflow_model_server for the inference.
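As an illustration only, a gRPC client along these lines could send a tf.Example to the model server; the host/port, model name, input key, and feature names below are assumptions and must match your actual exported signature and server flags:

    from grpc.beta import implementations
    import tensorflow as tf
    from tensorflow_serving.apis import predict_pb2
    from tensorflow_serving.apis import prediction_service_pb2

    # Connect to a tensorflow_model_server started with --port=9000.
    channel = implementations.insecure_channel('localhost', 9000)
    stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

    # Serialize one tf.Example using the same feature names as at training time
    # (the features below are placeholders).
    example = tf.train.Example(features=tf.train.Features(feature={
        'age': tf.train.Feature(int64_list=tf.train.Int64List(value=[35])),
        'education': tf.train.Feature(bytes_list=tf.train.BytesList(value=[b'Bachelors'])),
    }))

    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'wide_and_deep'  # must match --model_name
    request.inputs['inputs'].CopyFrom(
        tf.contrib.util.make_tensor_proto(example.SerializeToString(), shape=[1]))

    result = stub.Predict(request, 10.0)  # 10 second timeout
    print(result)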

For more details look at the tutorial itself.

Hope it helps.

0
votes

Does your graph have any variables then? If not, and all the operations work with constants instead, you can specify a flag in the Saver constructor:

    saver = tf.train.Saver(allow_empty=True)