1 vote

I am trying to train a pre-trained InceptionV3 model in Keras with the TensorFlow backend.

I have a training dataset of 10,000 grayscale images of size (299, 299). I reshaped it to the TensorFlow input shape (10000, 299, 299, 1). When I try to fit the InceptionV3 model, I get the following error:

ValueError: Error when checking input: expected input_1 to have shape (None, None, None, 3) but got array with shape (10000, 299, 299, 1)

I tried to change the input shape of the tensor by using

from keras.layers import Input
from keras.applications.inception_v3 import InceptionV3
input_tensor = Input(shape=(299, 299, 1))
base_model = InceptionV3(weights='imagenet', include_top=False, input_tensor=input_tensor)

I am getting the following error:

ValueError: Dimension 0 in both shapes must be equal, but are 3 and 32 for 'Assign_376' (op: 'Assign') with input shapes: [3,3,1,32], [32,3,3,3].

Could someone please help me solve this? My dataset is grayscale, and I have no idea how to feed a grayscale dataset into InceptionV3 in Keras with the TensorFlow backend.


1 Answer

2 votes

If you're using the pre-trained weights, you can't simply change the depth to 1 and expect it to work, because the first layer of convolutional filters has a depth of 3. You can refer to this post for some ideas of what can be done.
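If you just want something that works with the unmodified pre-trained network, one common workaround (my suggestion, not necessarily what the linked post describes) is to stack your grayscale channel three times so the input has depth 3. A rough sketch, assuming x_gray is your (10000, 299, 299, 1) array:

import numpy as np
from keras.applications.inception_v3 import InceptionV3, preprocess_input

# duplicate the single grayscale channel along the last axis -> (10000, 299, 299, 3)
x_rgb = np.repeat(x_gray, 3, axis=-1).astype('float32')
x_rgb = preprocess_input(x_rgb)  # scale pixels to the range InceptionV3 was trained on

base_model = InceptionV3(weights='imagenet', include_top=False)
features = base_model.predict(x_rgb)

This wastes a little memory by tripling the input, but it lets you keep every pre-trained weight exactly as it is.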

What that error is saying is that your first-layer weights after editing have shape [3, 3, 1, 32], i.e. 32 filters of size 3x3 with depth 1, while the pre-trained weights have shape [32, 3, 3, 3], i.e. 32 filters of size 3x3 with depth 3. You have to solve two problems here:

  1. Change the depth to 3 and add another layer before it to handle the single-channel input (see the sketch after this list), or refer to the post I mentioned above for other options.
  2. Ensure that the shapes of the weights are indexed in the same manner; the ValueError above also shows an ordering mismatch between the two shapes.
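For option 1, a minimal sketch of what I mean (the layer name gray_to_rgb is just illustrative): a small trainable 1x1 convolution maps the single channel to the 3 channels the pre-trained network expects, and the unmodified InceptionV3 is applied on top:

from keras.layers import Input, Conv2D
from keras.models import Model
from keras.applications.inception_v3 import InceptionV3

gray_input = Input(shape=(299, 299, 1))
# trainable 1x1 convolution that expands 1 channel to the 3 channels InceptionV3 expects
rgb_like = Conv2D(3, (1, 1), padding='same', name='gray_to_rgb')(gray_input)

base_model = InceptionV3(weights='imagenet', include_top=False, input_shape=(299, 299, 3))
features = base_model(rgb_like)  # reuse the pre-trained weights untouched

model = Model(inputs=gray_input, outputs=features)
model.summary()

The 1x1 convolution adds only a handful of weights, and you can freeze the rest (base_model.trainable = False) if you only want to train the new layer at first.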