
Here's my convolutional neural network:

input_shape = (28,28.1)
class cnn_model(tf.keras.Model):
    def __init__(self):

        super(cnn_model,self).__init__()
        self.conv1 = layers.Conv2D(32,(3,3),activation='relu',input_shape= input_shape)
        self.maxpool = layers.MaxPool2D((2,2))
        self.conv2 = layers.Conv2D(64,(3,3),activation ='relu')
        self.conv3 = layers.Conv2D(64,(3,3),activation='relu')
        self.flatten = layers.Flatten()
        self.dense64 = layers.Dense(64,activation='relu')
        self.dense10 = layers.Dense(10,activation='relu')
    def call(self,inputs):
        x = self.conv1(inputs)
        x = self.maxpool(x)
        x = self.conv2(x)
        x = self.maxpool(x)
        x = self.conv3(x)
        x = self.flatten(x)
        x = self.dense64(x)
        x = self.dense10(x)
        return x

I am getting the following error:

    model = cnn_model()
    print(model.call(train_data[0]))

    ValueError: Input 0 of layer conv2d_6 is incompatible with the layer: expected ndim=4, found ndim=3. Full shape received: [28, 28, 1]

and the shape of train_data[0] is (28, 28, 1).

What is wrong?


2 Answers


Your input_shape parameter looks fine, so I'm guessing train_data[0] doesn't have enough dimensions. Probably train_data.shape is something like (N, H, W, C), which is ready to go into the model; however, train_data[0].shape comes out as (H, W, C), which has one dimension fewer than the layer expects. If you want to feed a single sample to the model, you have to reshape train_data[0] to (1, H, W, C), for example with NumPy's expand_dims.
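A minimal sketch of that reshape, using a zero array as a stand-in for train_data[0] (which I don't have access to):

```python
import numpy as np

# Stand-in for train_data[0]: a single sample of shape (H, W, C) = (28, 28, 1)
sample = np.zeros((28, 28, 1))

# Add a leading batch dimension so the Conv2D layer sees ndim=4
batched = np.expand_dims(sample, axis=0)
print(batched.shape)  # (1, 28, 28, 1)
```

Equivalently, sample[np.newaxis, ...] or sample.reshape((1, 28, 28, 1)) produce the same result; then model(batched) should run without the ndim error.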


From your code snippet,

input_shape = (28,28.1)

Is there a typo here, a . instead of a ,? Did you intend to write it as below?

input_shape = (28, 28, 1)
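To see why this matters: with the dot, Python parses the literal as a 2-tuple containing a float, not the 3-tuple (height, width, channels) that a Conv2D input_shape needs. A quick check:

```python
# (28, 28.1) is one int and one float: a 2-tuple
typo = (28, 28.1)
# (28, 28, 1) is the intended (height, width, channels) 3-tuple
fixed = (28, 28, 1)

print(len(typo))   # 2
print(len(fixed))  # 3
```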