
I'm using TensorFlow batch normalization in my deep neural network successfully. I'm doing it the following way:

if apply_bn:
    with tf.variable_scope('bn'):
        # learnable offset (beta) and scale (gamma) parameters
        beta = tf.Variable(tf.constant(0.0, shape=[out_size]), name='beta', trainable=True)
        gamma = tf.Variable(tf.constant(1.0, shape=[out_size]), name='gamma', trainable=True)
        # per-batch statistics over the batch dimension
        batch_mean, batch_var = tf.nn.moments(z, [0], name='moments')
        # moving averages of the batch statistics, used at inference time
        ema = tf.train.ExponentialMovingAverage(decay=0.5)

        def mean_var_with_update():
            ema_apply_op = ema.apply([batch_mean, batch_var])
            with tf.control_dependencies([ema_apply_op]):
                return tf.identity(batch_mean), tf.identity(batch_var)

        # training: use batch statistics (and update the moving averages);
        # inference: use the accumulated moving averages
        mean, var = tf.cond(self.phase_train,
                            mean_var_with_update,
                            lambda: (ema.average(batch_mean), ema.average(batch_var)))

        self.z_prebn.append(z)
        z = tf.nn.batch_normalization(z, mean, var, beta, gamma, 1e-3)
        self.z.append(z)

        self.bn.append((mean, var, beta, gamma))

And it works fine for both the training and testing phases. However, I run into problems when I try to use the computed network parameters in another project, where I need to compute all the matrix multiplications and other operations myself. The problem is that I can't reproduce the behavior of the tf.nn.batch_normalization function:

feed_dict = {
    self.tf_x: np.array([range(self.x_cnt)]) / 100, 
    self.keep_prob: 1,
    self.phase_train: False
}

for i in range(len(self.z)):
    # print the value at index [0][1] of each tensor for layer i
    print(self.sess.run([
        self.z_prebn[i][0][1], # before bn
        self.bn[i][0][1],      # mean
        self.bn[i][1][1],      # var
        self.bn[i][2][1],      # offset
        self.bn[i][3][1],      # scale
        self.z[i][0][1],       # after bn
    ], feed_dict=feed_dict))
    # prints
    # [-0.077417567, -0.089603029, 0.000436493, -0.016652612, 1.0055743, 0.30664611]

According to the formula on the page https://www.tensorflow.org/versions/r1.2/api_docs/python/tf/nn/batch_normalization:

bn = scale * (x - mean) / (sqrt(var) + 1e-3) + offset

But as we can see,

1.0055743 * (-0.077417567 - -0.089603029)/(0.000436493^0.5 + 1e-3) + -0.016652612
= 0.543057

Which differs from the value 0.30664611 computed by TensorFlow itself. So what am I doing wrong here, and why can't I just calculate the batch-normalized value myself?

Thanks in advance!


1 Answer


The formula used is slightly different from:

bn = scale * (x - mean) / (sqrt(var) + 1e-3) + offset

It should be:

bn = scale * (x - mean) / (sqrt(var + 1e-3)) + offset

The variance_epsilon argument is added to the variance, not to sigma (the standard deviation, i.e. the square root of the variance), so it belongs inside the square root.

After the correction, the formula yields the correct value:

1.0055743 * (-0.077417567 - -0.089603029)/((0.000436493 + 1e-3)**0.5)  + -0.016652612
# 0.30664642276945747
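
To double-check the correction, here is a small stand-alone sketch that plugs the values printed in the question into both versions of the formula; only the corrected one reproduces TensorFlow's output:

```python
import math

# values printed by the question's debug loop
x      = -0.077417567   # pre-BN activation
mean   = -0.089603029   # moving-average mean
var    =  0.000436493   # moving-average variance
offset = -0.016652612   # beta
scale  =  1.0055743     # gamma
eps    = 1e-3           # variance_epsilon

# incorrect reading: epsilon added to the standard deviation
wrong = scale * (x - mean) / (math.sqrt(var) + eps) + offset

# correct formula: epsilon added to the variance, inside the sqrt
right = scale * (x - mean) / math.sqrt(var + eps) + offset

print(wrong)  # ~0.5431, does not match TensorFlow
print(right)  # ~0.3066464, matches TensorFlow's output
```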