
I am implementing a convolutional neural network, but I am confused about the output size of a convolution layer in TensorFlow. I have seen this rule for calculating the output of a convolution: (Size - Filter + 2P)/Stride + 1. So if we have a 256 x 256 gray-scale image (i.e. channels = 1) with filter size 11, zero-padding = 0, and stride = 2, then substituting into that rule gives (256 - 11)/2 + 1 = 123.5, i.e. an output of 123 x 123. But when I implement the same values in TensorFlow and print the result, the output is 128 x 128! How does that happen?

Update:

IMAGE_H = 256
IMAGE_W = 256
batch_size = 10
num_hidden = 64
num_channels = 1
depth = 32

input = tf.placeholder(
     tf.float32, shape=(batch_size, IMAGE_H, IMAGE_W, num_channels))

w1 = tf.Variable(tf.random_normal([11, 11, num_channels,depth],stddev=0.1))
w2 = tf.Variable(tf.random_normal([7, 7, depth, depth], stddev=0.1))

b1 = tf.Variable(tf.zeros([depth]))
b2 = tf.Variable(tf.constant(1.0, shape=[depth]))

....
# model

conv1 = tf.nn.conv2d(input, w1 , [1, 2, 2, 1], padding='SAME')
print('conv1', conv1)
hidden_1 = tf.nn.relu(conv1 + b1)  
Comments:

The math is right, the output shouldn't be 128² but 123². The bug is in your code: show it. – nessuno

@nessuno Please see the update. – f.lust

1 Answer


Use padding='VALID' in this line:

conv1 = tf.nn.conv2d(input, w1 , [1, 2, 2, 1], padding='VALID')

padding='SAME' pads the input so that the output size depends only on the input size and the stride: output = ceil(Size / Stride), which here is ceil(256 / 2) = 128. padding='VALID' adds no padding and applies the formula you quoted, floor((Size - Filter + 2P)/Stride) + 1 with P = 0, giving floor((256 - 11)/2) + 1 = 123.
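To make the difference concrete, here is a small sketch of the two output-size formulas TensorFlow uses (the helper function `conv_output_size` is just for illustration, not part of the TensorFlow API):

```python
import math

def conv_output_size(size, filter_size, stride, padding):
    """Spatial output size of a 2-D convolution along one dimension."""
    if padding == 'VALID':
        # no padding: floor((size - filter) / stride) + 1
        return (size - filter_size) // stride + 1
    elif padding == 'SAME':
        # padded so the output depends only on input size and stride
        return math.ceil(size / stride)
    raise ValueError("padding must be 'SAME' or 'VALID'")

print(conv_output_size(256, 11, 2, 'VALID'))  # 123
print(conv_output_size(256, 11, 2, 'SAME'))   # 128
```

This matches the 128 x 128 you printed (from 'SAME') and the 123 x 123 your hand calculation predicted (from 'VALID').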