I am working with TensorFlow 1.4.
I created a custom tf.estimator for classification, like this:
    def model_fn(features, labels, mode, params):
        # Some operations here
        [...]
        return tf.estimator.EstimatorSpec(mode=mode,
                                          predictions={"Preds": predictions},
                                          loss=cost,
                                          train_op=train_op,
                                          eval_metric_ops=eval_metric_ops,
                                          training_hooks=[summary_hook])
    my_estimator = tf.estimator.Estimator(model_fn=model_fn,
                                          params=model_params,
                                          model_dir='/my/directory')
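For context, a complete model_fn of this shape might look roughly like the following. The network layers, the feature key "x", and the parameter names are my assumptions for illustration, not taken from the question:

```python
import tensorflow as tf

def model_fn(features, labels, mode, params):
    # Hypothetical two-layer classifier; sizes come from params.
    hidden = tf.layers.dense(features["x"], params["hidden_units"],
                             activation=tf.nn.relu)
    logits = tf.layers.dense(hidden, params["n_classes"], name="logits")
    predictions = {"Preds": tf.argmax(logits, axis=1)}

    # In PREDICT mode, labels is None, so return early with predictions only.
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)

    cost = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    train_op = tf.train.AdamOptimizer().minimize(
        cost, global_step=tf.train.get_global_step())
    eval_metric_ops = {
        "accuracy": tf.metrics.accuracy(labels=labels,
                                        predictions=predictions["Preds"])
    }
    return tf.estimator.EstimatorSpec(mode=mode,
                                      predictions=predictions,
                                      loss=cost,
                                      train_op=train_op,
                                      eval_metric_ops=eval_metric_ops)
```

Note the early return in PREDICT mode: this is exactly why predict() never sees labels.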
I can train it easily:
    input_fn = create_train_input_fn(path=train_files)
    my_estimator.train(input_fn=input_fn)
where create_train_input_fn returns a function that reads data from TFRecord files with the tf.data.Dataset API.
Since I am reading from TFRecord files, I don't have the labels in memory when making predictions.
My question is: how can I get both the predictions AND the labels back, from either the predict() method or the evaluate() method?
There seems to be no way to get both: predict() has no access to the labels, and evaluate() does not return the predictions dictionary.
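One workaround worth considering (my suggestion, not something the question establishes): have the predict-time input_fn return each label as an extra feature, and echo it back from model_fn in PREDICT mode. A sketch of that branch, where the features key "label" is a hypothetical name:

```python
import tensorflow as tf

def predict_branch(features, logits, mode):
    """Sketch of a model_fn PREDICT branch that passes labels through.

    Assumes the predict-time input_fn placed the label into the
    features dict under the (hypothetical) key 'label'.
    """
    predictions = {"Preds": tf.argmax(logits, axis=1)}
    if "label" in features:
        # Echo the label back so predict() yields it with each example.
        predictions["Labels"] = features["label"]
    return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)
```

Each dict yielded by predict() then contains both "Preds" and "Labels" for the same example.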
Comments:

The evaluate() call won't return the labels, because it runs a loop over your whole dataset and computes aggregated metrics, which are then returned. If you want both the predictions and the labels for each batch, you'll have to load the model from the checkpoint, create a tf.Session() and loop over sess.run([predictions, labels]) calls until your data is exhausted. – GPhilo

Get the tensors via tf.get_default_graph().get_tensor_by_name('logits') (adapt as needed) and run your graph. – GPhilo
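The approach from the comments could be sketched as follows. The tensor names 'logits:0' and 'labels:0' are hypothetical and must be adapted after inspecting your graph; the checkpoint directory is the estimator's model_dir from the question:

```python
import tensorflow as tf

def predictions_and_labels(checkpoint_dir='/my/directory'):
    """Collect (predictions, labels) batches by re-running the saved graph."""
    results = []
    with tf.Session() as sess:
        # Rebuild the graph from the .meta file and restore the weights.
        latest = tf.train.latest_checkpoint(checkpoint_dir)
        saver = tf.train.import_meta_graph(latest + '.meta')
        saver.restore(sess, latest)

        graph = tf.get_default_graph()
        # Hypothetical tensor names; adapt them to your model.
        preds = graph.get_tensor_by_name('logits:0')
        labels = graph.get_tensor_by_name('labels:0')

        while True:
            try:
                results.append(sess.run([preds, labels]))
            except tf.errors.OutOfRangeError:
                break  # the input pipeline is exhausted
    return results
```

This works because the tf.data input pipeline is part of the saved graph, so each sess.run() call advances the iterator by one batch until it raises OutOfRangeError.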