
I am trying to build a deconvolution network using TensorFlow.

Here is my code:

def decoder(self, activations):
    with tf.variable_scope("Decoder") as scope:

        h0 = conv2d(activations, 128, name = "d_h0_conv_1")
        h0 = lrelu(h0)
        shape = activations.get_shape().as_list()
        h0 = deconv2d(h0, [shape[0], 2 * shape[1], 2 * shape[2], 128], name = "d_h0_deconv_1") 
        h0 = lrelu(h0)

        h1 = conv2d(h0, 128, name = "d_h1_conv_1")
        h1 = lrelu(h1)
        h1 = conv2d(h1, 64, name = "d_h1_conv_2")
        h1 = lrelu(h1)
        shape = h1.get_shape().as_list()
        h1 = deconv2d(h1, [shape[0], 2 * shape[1], 2 * shape[2], 64], name = "d_h1_deconv_1") 
        h1 = lrelu(h1)

        h2 = conv2d(h1, 64, name = "d_h2_conv_1")
        h2 = lrelu(h2)
        h2 = conv2d(h2, 3, name = "d_h2_conv_2")

        output = h2
        print shape


    return output

The parameter activations is basically the activations from a VGG19 network.

Here is the deconv2d() function:

def deconv2d(input_, output_shape,
             k_h=3, k_w=3, d_h=1, d_w=1, stddev=0.02,
             name="deconv2d", with_w=False):
    with tf.variable_scope(name):
        # filter : [height, width, output_channels, in_channels]
        w = tf.get_variable('w', [k_h, k_w, output_shape[-1], input_.get_shape()[-1]],
                            initializer=tf.contrib.layers.variance_scaling_initializer())

        deconv = tf.nn.conv2d_transpose(input_, w, output_shape=output_shape,
                                        strides=[1, d_h, d_w, 1], padding='SAME')

        biases = tf.get_variable('biases', [output_shape[-1]], initializer=tf.constant_initializer(0.0))
        deconv = tf.reshape(tf.nn.bias_add(deconv, biases), deconv.get_shape())

        return deconv

and this is the loss:

with tf.name_scope("total_loss"):
    self.loss = tf.nn.l2_loss(self.output - self.images)
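For reference, tf.nn.l2_loss(t) computes half the sum of squared elements (no square root, and no averaging over the batch), which a small NumPy sketch makes explicit:

```python
import numpy as np

def l2_loss(t):
    """Mirror of tf.nn.l2_loss: half the sum of squared elements."""
    return np.sum(np.square(t)) / 2.0

# Per-pixel reconstruction error between an output and a target image.
output = np.array([[1.0, 2.0], [3.0, 4.0]])
images = np.array([[1.0, 0.0], [3.0, 2.0]])
print(l2_loss(output - images))  # (0 + 4 + 0 + 4) / 2 = 4.0
```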

The forward pass does not produce a shape-compatibility error. However, during optimization,

with tf.variable_scope("Optimizer"):
    optimizer = tf.train.AdamOptimizer(config.learning_rate)
    grad_and_vars = optimizer.compute_gradients(self.loss, var_list=self.d_vars)
    self.d_optim = optimizer.apply_gradients(grad_and_vars)

TensorFlow produces this error:

Traceback (most recent call last):
  File "main.py", line 74, in <module>
    tf.app.run()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 44, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "main.py", line 59, in main
    dcgan.train(FLAGS)
  File "/home/junyonglee/workspace/bi_sim/sumGAN/model.py", line 121, in train
    grad_and_vars = optimizer.compute_gradients(self.loss, var_list = self.d_vars)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 354, in compute_gradients
    colocate_gradients_with_ops=colocate_gradients_with_ops)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gradients_impl.py", line 500, in gradients
    in_grad.set_shape(t_in.get_shape())
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 425, in set_shape
    self._shape = self._shape.merge_with(shape)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_shape.py", line 585, in merge_with
    (self, other))
ValueError: Shapes (30, 256, 256, 64) and (30, 128, 128, 64) are not compatible

The output size of the decoder is (30, 256, 256, 3), where 30 is the batch size.

It looks like at layer "d_h1_deconv_1", the incoming gradient (the gradient flowing into the op) has shape (30, 256, 256, 64), while the local gradient (the gradient w.r.t. the inputs) has shape (30, 128, 128, 64), which is expected given that the op is a transposed convolution.
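With padding='SAME', conv2d_transpose ties its gradient shapes to the stride: the requested output spatial size is the input size multiplied by the stride. The deconv2d defaults above use d_h = d_w = 1 while output_shape asks for doubled spatial dimensions, and that inconsistency only surfaces at gradient time. A plain-Python sketch of the rule (helper name is mine, for illustration only):

```python
def transpose_output_hw(in_h, in_w, d_h, d_w):
    """Expected spatial dims of conv2d_transpose with padding='SAME'."""
    return (in_h * d_h, in_w * d_w)

# Stride 1 keeps the spatial size, so a (30, 128, 128, 64) input yields
# (128, 128), not the (256, 256) requested in output_shape.
print(transpose_output_hw(128, 128, 1, 1))  # (128, 128)
# Stride 2 doubles it, matching the 2 * shape[1], 2 * shape[2] request.
print(transpose_output_hw(128, 128, 2, 2))  # (256, 256)
```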

Does anyone know how to properly backprop using conv2d_transpose()? Thank you!


1 Answer


Can you show us your deconv2d function? Without it I can't offer you much advice.

Here are two ways I implemented such a deconvolution function:

def transpose_deconvolution_layer(input_tensor, used_weights, new_shape, stride, scope_name):
    with tf.variable_scope(scope_name):
        output = tf.nn.conv2d_transpose(input_tensor, used_weights, output_shape=new_shape,
                                        strides=[1, stride, stride, 1], padding='SAME')
        output = tf.nn.relu(output)
        return output


def resize_deconvolution_layer(input_tensor, used_weights, new_shape, stride, scope_name):
    with tf.variable_scope(scope_name):
        output = tf.image.resize_images(input_tensor, (new_shape[1], new_shape[2]))
        output, unused_weights = conv_layer(output, 3, new_shape[3] * 2, new_shape[3], 1,
                                            scope_name + "_awesome_deconv")
        return output

Please test whether this works. If you want to know more about why I implemented both, check this article: http://www.pinchofintelligence.com/photorealistic-neural-network-gameboy/ and this one: http://distill.pub/2016/deconv-checkerboard/
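The resize-then-convolve idea behind the second function (the approach the distill.pub article recommends to avoid checkerboard artifacts) can be sketched independently of TensorFlow: first upsample with a fixed interpolation, then apply an ordinary convolution. A minimal NumPy version of just the upsampling step, for NHWC batches:

```python
import numpy as np

def nearest_upsample(x, factor=2):
    """Nearest-neighbour upsample of an NHWC batch; the resize step of
    the resize-then-convolve alternative to conv2d_transpose."""
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

batch = np.ones((30, 128, 128, 64))
print(nearest_upsample(batch).shape)  # (30, 256, 256, 64)
```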

Let me know if this helped!

Kind regards