4 votes

I'm trying to make a convolutional network that accepts grayscale images of any size:

inputs = tf.placeholder(tf.float32, [None, None, None, 1])

The output of the last convolutional layer has the shape [None, None, None, 512], where 512 is the number of channels. So far everything is great. The problem is that I need to collapse the second and third dimensions, so I need to reshape. But I don't know the second and third dimensions at graph build time, so I do this:

dims = tf.shape(output)
output = tf.reshape(output, [-1, dims[1] * dims[2], 512])

I would expect the final shape to be [?, ?, 512], but it is [?, ?, ?]. I need to know the last dimension at graph build time later in the code, so is there a way to reshape the output tensor in a way that preserves the size of the last dimension? Thank you.
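For reference, a minimal self-contained sketch (not from the original post; it assumes the TF 1.x-style API used above, and the tf.nn.conv2d call is only a stand-in for the real convolutional stack) that reproduces the behaviour:

import tensorflow as tf

inputs = tf.placeholder(tf.float32, [None, None, None, 1])

# Stand-in for the last convolutional layer: a 3x3 conv producing 512 channels.
weights = tf.Variable(tf.truncated_normal([3, 3, 1, 512], stddev=0.1))
output = tf.nn.conv2d(inputs, weights, strides=[1, 1, 1, 1], padding='SAME')

# Collapse the two spatial dimensions, which are only known at run time.
dims = tf.shape(output)
output = tf.reshape(output, [-1, dims[1] * dims[2], 512])

print(output.get_shape())  # (?, ?, 512) on 0.10+, (?, ?, ?) on 0.9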

Which version of TensorFlow are you using? This works for me using the code at HEAD (and in the 0.10 release). - mrry
Yep, that was it, I used 0.9.0, it works in 0.10.0. Thanks! - user1760770

1 Answer

3 votes

This is a tricky case for TensorFlow's static shape inference, since it relies on propagating information about a partially known value: the new shape passed to tf.reshape is a computed tensor, but its last element is the constant 512. We added support for this case (and some similar cases that treat tensors as shapes) in release 0.10.
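For anyone stuck on a release older than 0.10, one possible workaround (a sketch using Tensor.set_shape, not something stated in the answer itself) is to re-assert the statically known part of the shape after the dynamic reshape:

dims = tf.shape(output)
output = tf.reshape(output, [-1, dims[1] * dims[2], 512])
# Re-assert the part of the shape that is known at graph build time;
# set_shape only adds static shape information, it does not change the tensor.
output.set_shape([None, None, 512])
print(output.get_shape())  # (?, ?, 512)

On 0.10 and later the set_shape call should be unnecessary, since the shape function for tf.reshape can extract the constant 512 from the computed shape tensor.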