
I am confused about the following code:

import tensorflow as tf
import numpy as np
from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.framework import dtypes

'''
Randomly crop a tensor, then return the crop position
'''
def random_crop(value, size, seed=None, name=None):
    with ops.name_scope(name, "random_crop", [value, size]) as name:
        value = ops.convert_to_tensor(value, name="value")
        size = ops.convert_to_tensor(size, dtype=dtypes.int32, name="size")
        shape = array_ops.shape(value)
        check = control_flow_ops.Assert(
                math_ops.reduce_all(shape >= size),
                ["Need value.shape >= size, got ", shape, size],
                summarize=1000)
        shape = control_flow_ops.with_dependencies([check], shape)
        limit = shape - size + 1
        begin = tf.random_uniform(
                array_ops.shape(shape),
                dtype=size.dtype,
                maxval=size.dtype.max,
                seed=seed) % limit
        return tf.slice(value, begin=begin, size=size, name=name), begin

sess = tf.InteractiveSession()
size = [10]
a = tf.constant(np.arange(0, 100, 1))

print (a.eval())

a_crop, begin = random_crop(a, size = size, seed = 0)
print ("offset: {}".format(begin.eval()))
print ("a_crop: {}".format(a_crop.eval()))

a_slice = tf.slice(a, begin=begin, size=size)
print ("a_slice: {}".format(a_slice.eval()))

assert (tf.reduce_all(tf.equal(a_crop, a_slice)).eval() == True)
sess.close()

outputs:

[ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
 96 97 98 99]
offset: [46]
a_crop: [89 90 91 92 93 94 95 96 97 98]
a_slice: [27 28 29 30 31 32 33 34 35 36]

There are two tf.slice calls:

(1). called in function random_crop, such as tf.slice(value, begin=begin, size=size, name=name)

(2). called as a_slice = tf.slice(a, begin=begin, size=size)

The parameters (value, begin and size) of these two slice operations are the same.

However, why are the printed values of a_crop and a_slice different, while tf.reduce_all(tf.equal(a_crop, a_slice)).eval() is True?

Thanks

EDIT1 Thanks @xdurch0, I understand the first question now. TensorFlow's random_uniform seems to act like a random number generator.

import tensorflow as tf
import numpy as np

sess = tf.InteractiveSession()
size = [10]
np_begin = np.random.randint(0, 50, size=1)
tf_begin = tf.random_uniform(shape = [1], minval=0, maxval=50, dtype=tf.int32, seed = 0)
a = tf.constant(np.arange(0, 100, 1))

a_slice = tf.slice(a, np_begin, size = size)
print ("a_slice: {}".format(a_slice.eval()))
a_slice = tf.slice(a, np_begin, size = size)
print ("a_slice: {}".format(a_slice.eval()))

a_slice = tf.slice(a, tf_begin, size = size)
print ("a_slice: {}".format(a_slice.eval()))
a_slice = tf.slice(a, tf_begin, size = size)
print ("a_slice: {}".format(a_slice.eval()))

sess.close()

output

a_slice: [42 43 44 45 46 47 48 49 50 51]
a_slice: [42 43 44 45 46 47 48 49 50 51]
a_slice: [41 42 43 44 45 46 47 48 49 50]
a_slice: [29 30 31 32 33 34 35 36 37 38]
tf.random_uniform returns different values each time it is evaluated, so comparing things based on different evaluations of these random values is not sensible. - xdurch0
@xdurch0, you are right. Interesting, the TensorFlow documentation says that tf.random_uniform returns "A tensor of the specified shape filled with random uniform values" (tensorflow.org/api_docs/python/tf/random_uniform). But it sounds like a random generator to me now, not values. But then why is `tf.reduce_all(tf.equal(a_crop, a_slice)).eval()` True? Thanks, - user200340
To make things clearer: random operations produce a different value on each call to run or eval. Evaluating the tf.equal(...) operation works because only one random value is generated and both slices are computed from it. If you use the tf.Session object and call run((a_crop, a_slice, tf.reduce_all(tf.equal(a_crop, a_slice)))) you receive two equal arrays and True. - jdehesa
@jdehesa Sorry, I am not sure I understand "evaluating the tf.equal(...) operation works because only one random value is generated and both slices are computed from it". Then why are the printed values of a_crop and a_slice different, if only one random value is generated and both slices are computed from it? Thanks - user200340
@user200340 One random value is generated each time a TensorFlow computation is issued (a call to tf.Session.run, or .eval()). Whenever you call .eval(), that is one new computation, and a new random value is produced. Maybe you can see it more clearly like this: if you do tf.stack([a_crop, a_slice]).eval() you will get a tensor with two equal rows. If you call tf.Session.run with multiple tensors, all the computations in that call will use the same random values. Does that make it any clearer? - jdehesa

1 Answer


The confusing thing here is that tf.random_uniform (like every random operation in TensorFlow) produces a new, different value on each evaluation call (each call to .eval() or, in general, each call to tf.Session.run). So if you evaluate a_crop you get one thing, and if you then evaluate a_slice you get a different thing, but if you evaluate tf.reduce_all(tf.equal(a_crop, a_slice)) you get True, because everything is computed in a single evaluation step, so only one random value is produced and it determines the value of both a_crop and a_slice. Another example: if you run tf.stack([a_crop, a_slice]).eval() you will get a tensor with two equal rows; again, only one random value was produced. More generally, if you call tf.Session.run with multiple tensors to evaluate, all the computations in that call will use the same random values.
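To make the "one random draw per run" behaviour concrete, here is a minimal NumPy sketch. It is not TensorFlow: the `run` helper is a hypothetical stand-in for `Session.run`/`.eval()`, modelling a single random draw shared by every op evaluated in the same call.

```python
import numpy as np

a = np.arange(100)
size = 10

def run(*ops):
    """Model one call to Session.run / .eval(): a single random
    offset is drawn and shared by every op in this call."""
    begin = np.random.randint(0, len(a) - size + 1)  # one draw per run
    return tuple(op(begin) for op in ops)

# Both "tensors" are slices computed from the same random offset.
a_crop = lambda b: a[b:b + size]
a_slice = lambda b: a[b:b + size]

# Separate runs -> separate random draws -> usually different slices.
(c,) = run(a_crop)
(s,) = run(a_slice)

# One run evaluating both -> one shared draw -> always equal.
c2, s2 = run(a_crop, a_slice)
assert np.array_equal(c2, s2)
```

This mirrors why `a_crop.eval()` and `a_slice.eval()` print different values, while a single evaluation of `tf.reduce_all(tf.equal(a_crop, a_slice))` is True.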

As a side note, if you actually need a random value in a computation that you want to reuse in a later computation, the easiest thing would be to just retrieve it with tf.Session.run, along with any other needed computation, and feed it back later through feed_dict; or you could have a tf.Variable and store the random value there. A more advanced possibility would be to use partial_run, an experimental API that allows you to evaluate part of the computation graph and continue evaluating it later, while maintaining the same state (i.e. the same random values, among other things).
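As a sketch of the first suggestion (plain NumPy standing in for the TensorFlow session; the names are illustrative): fetch the random offset once, keep it on the Python side, and reuse it in later computations, which is what feeding it back through `feed_dict` achieves.

```python
import numpy as np

a = np.arange(100)
size = 10

# One "run" returns both the slice and the random offset it used,
# analogous to sess.run([a_crop, begin]).
rng = np.random.default_rng(seed=0)
begin = int(rng.integers(0, len(a) - size + 1))
first = a[begin:begin + size]

# A later computation reuses the stored offset (the feed_dict idea),
# so it sees the same slice instead of a fresh random one.
later = a[begin:begin + size]
assert np.array_equal(first, later)
```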