I'm trying to make a convolutional network that accepts grayscale images of any size:
inputs = tf.placeholder(tf.float32, [None, None, None, 1])
The output of the last convolutional layer has shape [None, None, None, 512], where 512 is the number of channels. So far so good. The problem is that I need to collapse the second and third dimensions into one, which requires a reshape. Since I don't know those dimensions at graph-build time, I use the dynamic shape:
dims = tf.shape(output)
output = tf.reshape(output, [-1, dims[1] * dims[2], 512])
I would expect the resulting static shape to be [?, ?, 512], but it comes out as [?, ?, ?]. Later in the code I need the last dimension to be known at build time, so: is there a way to perform this reshape that preserves the static size of the last dimension? Thank you.