I am using the `tf.estimator` API to train models. As I understand it, the `model_fn` defines the computation graph and returns a different `tf.estimator.EstimatorSpec` according to the `mode`.
In `mode == tf.estimator.ModeKeys.TRAIN`, one can specify a `train_op` to be called at each training iteration, which in turn updates the trainable `tf.Variable` instances so as to optimise a certain loss. Let's call the `train_op` `optimizer`, and the variables `A` and `B`.
In order to speed up prediction and evaluation, I would like to have an auxiliary non-trainable `tf.Variable` `C`, depending exclusively on the already-trained variables. The values of this variable would thus be exportable, and it does not affect the training loss. Let's assume we want:

```python
C = tf.Variable(tf.matmul(A, B))
update_op = tf.assign(C, tf.matmul(A, B))
```
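To make the setup concrete, here is a minimal toy sketch of the two trainable variables and the assignment op (the values and shapes are made up, not from the actual model; written against the TF 1.x graph API, which is available as `tf.compat.v1` on TF 2.x):

```python
import tensorflow.compat.v1 as tf  # plain `import tensorflow as tf` on TF 1.x

tf.disable_eager_execution()

# Trainable variables A and B (stand-ins for the real model weights).
A = tf.get_variable("A", initializer=[[1.0, 2.0]])
B = tf.get_variable("B", initializer=[[3.0], [4.0]])

# Auxiliary non-trainable variable C, refreshed explicitly via update_op.
C = tf.get_variable("C", initializer=tf.zeros([1, 1]), trainable=False)
update_op = tf.assign(C, tf.matmul(A, B))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(update_op)  # C := A @ B
    print(sess.run(C))   # [[11.]]
```

`C` only changes when `update_op` is explicitly run; training steps on `A` and `B` leave it untouched until the next refresh.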
- What I tried: passing `tf.group(optimizer, update_op)` as the `train_op` in the `EstimatorSpec` works, but slows training down considerably, since the `train_op` now updates `C` at each iteration.
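For illustration, here is a sketch of how that grouped `train_op` sits inside a `model_fn` (the shapes, loss, and optimizer are hypothetical stand-ins; TF 1.x-style code, available as `tf.compat.v1` on TF 2.x):

```python
import tensorflow.compat.v1 as tf  # plain `import tensorflow as tf` on TF 1.x

tf.disable_eager_execution()

def model_fn(features, labels, mode):
    # Hypothetical trainable weights standing in for the real model.
    A = tf.get_variable("A", shape=[2, 2])
    B = tf.get_variable("B", shape=[2, 2])
    predictions = tf.matmul(tf.matmul(features, A), B)

    # Auxiliary non-trainable variable, refreshed only by update_op.
    C = tf.get_variable("C", shape=[2, 2], trainable=False)
    update_op = tf.assign(C, tf.matmul(A, B))

    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions={"out": predictions})

    loss = tf.losses.mean_squared_error(labels, predictions)
    if mode == tf.estimator.ModeKeys.TRAIN:
        optimizer = tf.train.GradientDescentOptimizer(0.1).minimize(
            loss, global_step=tf.train.get_or_create_global_step())
        # Grouping update_op with the optimizer refreshes C at *every* step,
        # which is what slows training down.
        return tf.estimator.EstimatorSpec(
            mode, loss=loss, train_op=tf.group(optimizer, update_op))
    return tf.estimator.EstimatorSpec(mode, loss=loss)
```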
Because `C` is only needed at eval/predict time, running `update_op` once at the end of training would be enough. Is it possible to assign a Variable at the end of training with a `tf.estimator.Estimator`?
The `update_op` is defined inside the `model_fn`, and the `model_fn` is passed as a parameter to the init method of a `tf.estimator.Estimator`, so I do not see any trivial way of running the `update_op` after the call to `estimator.train` has finished. As a reminder, here is the [reference](tensorflow.org/get_started/custom_estimators) on the Estimator API. @ThomasPinetz – syltruong