
I was trying to slice (split) an input image tensor of shape [batch_size, 256, 256, 3] into three separate variables of shape [batch_size, 256, 256, 1], where each variable contains exactly one channel of the input image, i.e.

R_channel=image[-1,256,256,0]

G_channel=image[-1,256,256,1]

B_channel=image[-1,256,256,2]

I have tried the following code for the above:

imgs, label = iterator.get_next()

channel_r=imgs[-1,:,:,0]
channel_g=imgs[-1,:,:,1]
channel_b=imgs[-1,:,:,2]
NN_network(imgs,channel_r,channel_g,channel_b)
...
def NN_network(imgs,c_r,c_g,c_b):
    conv1=tf.layers.conv2d(imgs,n_filter,kernel_Size,...)
    conv2=tf.layers.conv2d(c_r,n_filter,kernel_Size,...)
    conv3=tf.layers.conv2d(c_g,n_filter,kernel_Size,...)
    conv4=tf.layers.conv2d(c_b,n_filter,kernel_Size,...)
    concat_layer=tf.concat(axis=3,values=[imgs,c_r,c_g,c_b])

in TensorFlow, but I get the following error:

InvalidArgumentError (see above for traceback): ConcatOp : Dimensions of inputs should match: shape[0] = [16,12,256,256] vs. shape[1] = [1,12,256,256] [[node LC_Nikon1/concat (defined at ) = ConcatV2[N=4, T=DT_FLOAT, Tidx=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](/conv2d/BiasAdd, /conv2d_1/BiasAdd, /conv2d_2/BiasAdd, /conv2d_3/BiasAdd, gradients/resize_image_with_crop_or_pad_1/crop_to_bounding_box/Slice_grad/concat/axis)]] [[{{node resize_image_with_crop_or_pad/pad_to_bounding_box/GreaterEqual_3/_83}} = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_384_resize_image_with_crop_or_pad/pad_to_bounding_box/GreaterEqual_3", tensor_type=DT_BOOL, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]

How can I do this concat, and what does this error mean?


1 Answer


Runnable code:

import tensorflow as tf

# imgs stands in for the iterator output; tf.random.normal is available in TF 1.12
imgs = tf.random.normal((16, 256, 256, 3))

# Slice with ranges (0:1, 1:2, 2:3) so each channel keeps its 4-D shape
# [batch, 256, 256, 1] instead of being squeezed by integer indexing
channel_r = imgs[:, :, :, 0:1]
channel_g = imgs[:, :, :, 1:2]
channel_b = imgs[:, :, :, 2:3]

n_filter = 16
kernel_Size = 3

def NN_network(imgs, c_r, c_g, c_b):
    conv1 = tf.layers.conv2d(imgs, n_filter, kernel_Size)
    conv2 = tf.layers.conv2d(c_r, n_filter, kernel_Size)
    conv3 = tf.layers.conv2d(c_g, n_filter, kernel_Size)
    conv4 = tf.layers.conv2d(c_b, n_filter, kernel_Size)
    # All four inputs now share the same batch and spatial dims,
    # so concatenating along the channel axis works
    concat_layer = tf.concat(axis=3, values=[imgs, c_r, c_g, c_b])

NN_network(imgs, channel_r, channel_g, channel_b)

Your error message says that shape[0] (i.e. the shape coming from imgs) is [16, 12, 256, 256], while shape[1] (i.e. the shape coming from c_r) is [1, 12, 256, 256]: their 0th (batch) dimensions don't match.
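For illustration, here is a minimal sketch (using made-up tensors with the shapes from the error message) of why tf.concat rejects such inputs:

import tensorflow as tf

a = tf.zeros((16, 12, 256, 256))  # shape[0] from the error message
b = tf.zeros((1, 12, 256, 256))   # shape[1] from the error message
# Fails: every dimension except the concat axis must match,
# but the 0th (batch) dimensions are 16 vs. 1
bad = tf.concat(axis=3, values=[a, b])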

That is because you set channel_r=imgs[-1,:,:,0]: indexing with -1 picks out only the last image in the batch, so the result's 0th dimension no longer matches that of imgs.
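You can check the difference between the two slicing styles directly (a batch size of 16 is assumed here):

import tensorflow as tf

imgs = tf.zeros((16, 256, 256, 3))
print(imgs[-1, :, :, 0].shape)   # (256, 256): batch and channel dims are dropped
print(imgs[:, :, :, 0:1].shape)  # (16, 256, 256, 1): all four dims are kept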

I've changed channel_r=imgs[-1,:,:,0] to channel_r=imgs[:,:,:,0:1] so that the resulting channel_r keeps all 4 dimensions and differs from imgs only in the 3rd (channel) dimension.
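As an alternative to the three range slices, tf.split can produce all three channels in one call (equivalent here, assuming the channels sit on axis 3):

import tensorflow as tf

imgs = tf.zeros((16, 256, 256, 3))
# Splits the channel axis into three [16, 256, 256, 1] tensors
channel_r, channel_g, channel_b = tf.split(imgs, num_or_size_splits=3, axis=3)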