I'm going through the 'Deep MNIST for Experts' TensorFlow tutorial (https://www.tensorflow.org/versions/r0.8/tutorials/mnist/pros/index.html) and I'm stuck on this part:
Densely Connected Layer
Now that the image size has been reduced to 7x7, we add a fully-connected layer with 1024 neurons to allow processing on the entire image. We reshape the tensor from the pooling layer into a batch of vectors, multiply by a weight matrix, add a bias, and apply a ReLU.
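For context, here is a sketch of the shapes involved at this step, written with numpy as a stand-in for the TensorFlow ops in the tutorial (the batch size of 50 is an assumption; only the shapes matter):

```python
import numpy as np

batch = 50
pooled = np.zeros((batch, 7, 7, 64))         # output of the second pooling layer
flat = pooled.reshape(batch, 7 * 7 * 64)     # batch of vectors: (50, 3136)

W_fc1 = np.zeros((7 * 7 * 64, 1024))         # weight matrix; 1024 output neurons
b_fc1 = np.zeros(1024)                       # bias
h_fc1 = np.maximum(flat @ W_fc1 + b_fc1, 0)  # matmul + bias, then ReLU

print(h_fc1.shape)                           # (50, 1024)
```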
Why the number 1024? Where did that come from?
My understanding of the fully-connected layer is that it somehow has to get back to the original image size (and then we start plugging things into our softmax equation). In this case, the original image size is Height x Width x Channels = 28*28*1 = 784... not 1024.
What am I missing here?