I am starting out with deep learning using Keras and TensorFlow. At this very first stage I am stuck on a doubt: when I use tf.contrib.layers.flatten
(API 1.8) to flatten an image (which could be multichannel as well),
how is this different from using NumPy's flatten function?
How does this affect training? I can see that tf.contrib.layers.flatten
takes longer than NumPy's flatten. Does it do something more?
This is a closely related question, but its accepted answer involves Theano and does not resolve my doubt exactly.
Example:
Let's say I have training data of shape (10000, 2, 96, 96).
Now I need the output to be of shape (10000, 18432).
I can do this using TensorFlow's flatten, or using NumPy's reshape like
X_reshaped = X_train.reshape(*X_train.shape[:1], -1)
What difference does it make in training, and which is the best practice?
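For reference, a minimal NumPy sketch of the reshape described above (the array names follow the question; the data here is just zeros, and the layer-based alternative is omitted since it needs a TF graph):

```python
import numpy as np

# Dummy batch with the question's shape: (num_images, channels, height, width).
X_train = np.zeros((10000, 2, 96, 96), dtype=np.float32)

# Keep the batch axis and merge all remaining axes;
# -1 lets NumPy infer 2 * 96 * 96 = 18432.
X_reshaped = X_train.reshape(X_train.shape[0], -1)

print(X_reshaped.shape)  # (10000, 18432)
```

Note that `reshape` here is a pure array-shape operation done once on the data, whereas a flatten layer becomes part of the computation graph and runs on every forward pass.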
X_reshaped.print()? – Mohammad Athar
(10000,2,96,96) refers to (num_images, num_colourchannels, x_pixel, y_pixel)? On several different occasions I have seen shapes as (num_images, x_pixel, y_pixel, num_colourchannels). Does your choice make a difference, and how did you motivate it? Thanks! – NeStack
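Regarding the layout question in the last comment: the two conventions are usually called channels-first (NCHW) and channels-last (NHWC), and converting between them is an axis permutation, not a reshape. A hedged sketch, with illustrative variable names:

```python
import numpy as np

# Channels-first layout, as in the question: (num_images, channels, height, width).
x_nchw = np.zeros((10000, 2, 96, 96), dtype=np.float32)

# Move the channel axis to the end to get channels-last (NHWC).
x_nhwc = np.transpose(x_nchw, (0, 2, 3, 1))

print(x_nhwc.shape)  # (10000, 96, 96, 2)
```

Both layouts flatten to the same (10000, 18432) size, but the element order within each flattened row differs, so the choice must be consistent across the whole pipeline.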