
My problem is that if the arguments of an operation are constant, TF caches the results:

import numpy as np
import tensorflow as tf

sess = tf.Session()

a = tf.constant(np.random.randn(100, 101))
b = tf.constant(np.random.randn(100, 102))
c = tf.constant(np.random.randn(101, 102))
# Some expensive operation.
res = tf.einsum('si,sj,ij->s', a, b, c)
%timeit sess.run(res)

The slowest run took 577.76 times longer than the fastest. This could mean that an intermediate result is being cached. 10000 loops, best of 3: 137 µs per loop

If I generate the tensors from scratch on each run, then I'm also counting the overhead of tensor generation:

a = tf.random_normal((100, 101))
b = tf.random_normal((100, 102))
c = tf.random_normal((101, 102))
res = tf.einsum('si,sj,ij->s', a, b, c)
%timeit sess.run(res)

The slowest run took 4.07 times longer than the fastest. This could mean that an intermediate result is being cached. 10 loops, best of 3: 28 ms per loop

Maybe in this particular example the overhead is not large, but for cheaper operations it can be significant.

Is there any way to freeze the arguments so that they are not recomputed on each sess.run(), while suppressing all other caching?


1 Answer


On each run (and across sessions), whatever tensor objects you pass to sess.run() are evaluated. From the docs:

A Session object encapsulates the environment in which Operation objects are executed, and Tensor objects are evaluated.

It's not possible to skip evaluating (computing the values of) these tensors across sessions, as long as they are part of the expression being evaluated.
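For instance, a stateful op (which cannot be folded away into a constant) is executed on every call that fetches it; a minimal sketch, assuming TF 1.x graph mode as in the question:

import tensorflow as tf

# An op with side effects is re-executed on every sess.run() call
# that fetches it; it cannot simply be cached away.
counter = tf.Variable(0)
increment = tf.assign_add(counter, 1)

sess = tf.Session()
sess.run(tf.global_variables_initializer())
print(sess.run(increment))  # 1 -- the assign_add op executes
print(sess.run(increment))  # 2 -- and executes again on the next call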

In your example, the tensor objects a, b, c are evaluated anew in each session, since their values are needed to compute the einsum. But within a session, they're computed only once and cached across runs.

Within a session, however, you can evaluate a tensor just once and reuse the resulting value elsewhere. Again, this works only within that session.
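As a rough sketch of that idea (assuming TF 1.x graph mode; the placeholder names ph_a, ph_b, ph_c are illustrative, not from the question), you could evaluate the random inputs a single time and feed the resulting NumPy arrays back in, so that repeated runs only pay for the einsum:

import tensorflow as tf

# Random inputs, evaluated only once below.
a = tf.random_normal((100, 101))
b = tf.random_normal((100, 102))
c = tf.random_normal((101, 102))

# Placeholders to feed the frozen values back into the graph.
ph_a = tf.placeholder(tf.float32, shape=(100, 101))
ph_b = tf.placeholder(tf.float32, shape=(100, 102))
ph_c = tf.placeholder(tf.float32, shape=(101, 102))
res = tf.einsum('si,sj,ij->s', ph_a, ph_b, ph_c)

sess = tf.Session()
# Evaluate the random inputs a single time within this session.
a_val, b_val, c_val = sess.run([a, b, c])
feed = {ph_a: a_val, ph_b: b_val, ph_c: c_val}

# Placeholders are not constants, so the einsum is recomputed on
# every call, while the inputs are no longer regenerated.
%timeit sess.run(res, feed_dict=feed)

Since placeholders cannot be constant-folded, this keeps the arguments fixed without letting TensorFlow cache the einsum result itself.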