
Using this Model:

import numpy as np
from tensorflow.keras.layers import (Input, Conv2D, BatchNormalization,
                                     Activation, MaxPooling2D, UpSampling2D,
                                     concatenate)
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

def unet(input_shape=(256, 256, 256)):
  inputs = Input(input_shape)

  conv1 = Conv2D(64, (3, 3), padding='same')(inputs)
  conv1 = BatchNormalization()(conv1)
  conv1 = Activation('relu')(conv1)
  conv1 = Conv2D(64, (3, 3), padding='same')(conv1)
  conv1 = BatchNormalization()(conv1)
  conv1 = Activation('relu')(conv1)
  pool1 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2))(conv1)

  conv2 = Conv2D(128, (3, 3), padding='same')(pool1)
  conv2 = BatchNormalization()(conv2)
  conv2 = Activation('relu')(conv2)
  conv2 = Conv2D(128, (3, 3), padding='same')(conv2)
  conv2 = BatchNormalization()(conv2)
  conv2 = Activation('relu')(conv2)
  pool2 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2))(conv2)

  conv3 = Conv2D(256, (3, 3), padding='same')(pool2)
  conv3 = BatchNormalization()(conv3)
  conv3 = Activation('relu')(conv3)
  conv3 = Conv2D(256, (3, 3), padding='same')(conv3)
  conv3 = BatchNormalization()(conv3)
  conv3 = Activation('relu')(conv3)
  pool3 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2))(conv3)

  conv4 = Conv2D(512, (3, 3), padding='same')(pool3)
  conv4 = BatchNormalization()(conv4)
  conv4 = Activation('relu')(conv4)
  conv4 = Conv2D(512, (3, 3), padding='same')(conv4)
  conv4 = BatchNormalization()(conv4)
  conv4 = Activation('relu')(conv4)
  pool4 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2))(conv4)

  # 'max' shadows the Python builtin; use a descriptive name instead.
  bottleneck = Conv2D(1024, (3, 3), padding='same')(pool4)
  bottleneck = BatchNormalization()(bottleneck)
  bottleneck = Activation('relu')(bottleneck)
  bottleneck = Conv2D(1024, (3, 3), padding='same')(bottleneck)
  bottleneck = BatchNormalization()(bottleneck)
  bottleneck = Activation('relu')(bottleneck)

  back4 = UpSampling2D((2, 2))(bottleneck)
  back4 = concatenate([conv4, back4], axis=3)
  back4 = Conv2D(512, (3, 3), padding='same')(back4)
  back4 = BatchNormalization()(back4)
  back4 = Activation('relu')(back4)
  back4 = Conv2D(512, (3, 3), padding='same')(back4)
  back4 = BatchNormalization()(back4)
  back4 = Activation('relu')(back4)
  back4 = Conv2D(512, (3, 3), padding='same')(back4)
  back4 = BatchNormalization()(back4)
  back4 = Activation('relu')(back4)

  back3 = UpSampling2D((2, 2))(back4)
  back3 = concatenate([conv3, back3], axis=3)
  back3 = Conv2D(256, (3, 3), padding='same')(back3)
  back3 = BatchNormalization()(back3)
  back3 = Activation('relu')(back3)
  back3 = Conv2D(256, (3, 3), padding='same')(back3)
  back3 = BatchNormalization()(back3)
  back3 = Activation('relu')(back3)
  back3 = Conv2D(256, (3, 3), padding='same')(back3)
  back3 = BatchNormalization()(back3)
  back3 = Activation('relu')(back3)

  back2 = UpSampling2D((2, 2))(back3)
  back2 = concatenate([conv2, back2], axis=3)
  back2 = Conv2D(128, (3, 3), padding='same')(back2)
  back2 = BatchNormalization()(back2)
  back2 = Activation('relu')(back2)
  back2 = Conv2D(128, (3, 3), padding='same')(back2)
  back2 = BatchNormalization()(back2)
  back2 = Activation('relu')(back2)
  back2 = Conv2D(128, (3, 3), padding='same')(back2)
  back2 = BatchNormalization()(back2)
  back2 = Activation('relu')(back2)

  back1 = UpSampling2D((2, 2))(back2)
  back1 = concatenate([conv1, back1], axis=3)
  back1 = Conv2D(64, (3, 3), padding='same')(back1)
  back1 = BatchNormalization()(back1)
  back1 = Activation('relu')(back1)
  back1 = Conv2D(64, (3, 3), padding='same')(back1)
  back1 = BatchNormalization()(back1)
  back1 = Activation('relu')(back1)
  back1 = Conv2D(64, (3, 3), padding='same')(back1)
  back1 = BatchNormalization()(back1)
  back1 = Activation('relu')(back1)

  outputs = Conv2D(1, (1, 1), activation='sigmoid')(back1)

  model = Model(inputs=[inputs], outputs=[outputs])
  model.summary()

  model.compile(optimizer=Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999), loss='binary_crossentropy', metrics=['accuracy'])

  return model
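As a side note on the architecture: the encoder applies MaxPooling2D four times, so each spatial side of the input must be divisible by 2**4 = 16 for the decoder's skip connections to line up. A quick check of the arithmetic:

```python
# The four MaxPooling2D layers each halve the spatial resolution, so a
# 256-pixel side shrinks to 256 / 2**4 = 16 at the bottleneck before
# the decoder's four UpSampling2D layers restore it. Sides that are
# not divisible by 16 would break the skip-connection concatenations.
side = 256
for _ in range(4):
    side //= 2
print(side)  # 16
```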

My model has the following parameters passed to the Model.fit() method:

X_train: a Python list containing 4 numpy.ndarray objects, each of shape (256, 256, 256)
y_train: a numpy.ndarray of shape (4, 5)

When passed to Model.fit(X_train, y_train, steps_per_epoch=10, epochs=100), I get the following error:

ValueError: Data cardinality is ambiguous:
  x sizes: 256, 256, 256, 256
  y sizes: 4
Please provide data which shares the same first dimension.
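The cardinality check can be reproduced with scaled-down NumPy stand-ins (hypothetical (16, 16, 16) arrays in place of the question's (256, 256, 256) ones, to keep the example light):

```python
import numpy as np

# Scaled-down stand-ins for the question's data: a list of 4 arrays
# and a (4, 5) label array.
X_train = [np.zeros((16, 16, 16), dtype=np.float32) for _ in range(4)]
y_train = np.zeros((4, 5), dtype=np.float32)

# Keras interprets a *list* of arrays as multiple separate inputs, one
# per list element, and takes each element's first dimension as its
# sample count. That yields x sizes 16, 16, 16, 16 (256, 256, 256, 256
# in the question) against a y size of 4 -- the ambiguity the error
# reports.
x_sizes = [x.shape[0] for x in X_train]
y_size = y_train.shape[0]
print(x_sizes, y_size)  # [16, 16, 16, 16] 4
```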

I assumed the arguments to Model.fit() were correct, because x sizes seems to refer to the elements of the list and y sizes to how many elements the list contains.

I have tried converting X_train to a numpy.ndarray, but Model.fit() did not accept that type as the first argument either, and I don't think reshaping y_train to shape (20,) is correct, because y sizes still refers to how many numpy.ndarray objects the X_train list contains.

Is there any other way to get Model.fit(X_train, y_train, steps_per_epoch=10, epochs=100) to run without raising the ValueError?

Update:

After resolving the above error, I get another error:

ValueError: logits and labels must have the same shape ((1, 256, 256, 1) vs (1, 5))

I don't believe simply reshaping the labels would help here, so how else might I 'match' the labels to the model's output?

Thanks.

X_train should not be a list but a numpy array of shape, it seems, (4, 256, 256, 256). Try X_train = np.array(X_train) before passing it to fit. – piterbarg

1 Answer


Your model has only one input, so convert your list to a single NumPy array:

X_train = np.asarray(X_train)

Also, your model expects labels of shape (4, 256, 256, 1), which does not match your label shape (4, 5).
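Both points can be sketched with NumPy stand-ins (shapes scaled down from the question's (256, 256, 256) to hypothetical (16, 16, 16) arrays to keep the example light):

```python
import numpy as np

# Scaled-down stand-ins: 4 samples, each a (16, 16, 16) array.
X_train = [np.zeros((16, 16, 16), dtype=np.float32) for _ in range(4)]

# One input, so stack the list into a single array: the first
# dimension becomes the sample count.
X_train = np.asarray(X_train)
print(X_train.shape)  # (4, 16, 16, 16)

# The final Conv2D(1, (1, 1), activation='sigmoid') layer produces one
# value per pixel, so the labels must be per-pixel masks matching the
# output's spatial size, not per-sample class vectors like (4, 5).
y_train = np.zeros((4, 16, 16, 1), dtype=np.float32)
print(y_train.shape)  # (4, 16, 16, 1)
```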