
I am trying to use a TensorFlow neural network in "interactive" mode: my goal is to load a trained model, keep it in memory, and then perform inference on it every once in a while.

The problem is that the TensorFlow Estimator class (tf.estimator.Estimator) apparently does not allow this.

The predict method (documentation, source) takes as input a batch of features and the path to the model. It then creates a session, loads the model and performs the inference. After that, the session is closed, so every subsequent inference has to load the model again.
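
For illustration, a minimal sketch of that pattern; my_model_fn, my_input_fn and the model directory are placeholders of mine, not part of the original question:

import tensorflow as tf

# my_model_fn and my_input_fn are hypothetical placeholders
estimator = tf.estimator.Estimator(model_fn=my_model_fn, model_dir='/path/to/model')

# Each call to predict() builds the graph, opens a session, restores the
# checkpoint from model_dir, yields the predictions and closes the session again
predictions = list(estimator.predict(input_fn=my_input_fn))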

How could I achieve my desired behavior using the Estimator class?

Thank you


1 Answer


You may want to have a look at tfe.make_template; its goal is precisely to make graph-based code available in eager mode.

Following the example given during the 2018 TF Summit, that would give you something like:

import tensorflow as tf
tfe = tf.contrib.eager  # TF 1.x eager-execution module

def apply_my_estimator(x):
  return my_estimator(x)

# Variables are created on the first call and reused on later ones
t = tfe.make_template('f', apply_my_estimator, create_graph_function=True)
print(t(x))
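
After the first call, subsequent calls to t should reuse the variables (and, with create_graph_function, the compiled graph) created on that first call, which is what keeps the model in memory between inferences.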