0 votes

When feeding an image to a pretrained InceptionResNetV2 network, I get the following error.

from keras.applications.inception_resnet_v2 import InceptionResNetV2

INPUT_SHAPE = (200, 250, 3)

img = load_img(...) # loads a 200x250 RGB image into a (200, 250, 3) numpy array
assert img.shape == INPUT_SHAPE # just fine

model = InceptionResNetV2(include_top=False, input_shape=INPUT_SHAPE)

model.predict(img)

ValueError: Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (200, 250, 3)

I don't understand why the model expects a 4-dimensional input, or how to provide it. What must be done to adapt the (200, 250, 3) image so that it can be processed by the model?

2 Answers

1 vote

Try reshaping your input so that it has a leading batch dimension, i.e. shape (1, 200, 250, 3).

You can use img = np.expand_dims(img, axis=0) or img = img.reshape((-1, height, width, channels)), where height, width and channels match your image.
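
A minimal sketch of the first option, assuming img is the (200, 250, 3) numpy array from the question (the preprocess_input call is an extra step not shown in the original code, but pretrained Keras models expect their own pixel scaling):

import numpy as np
from keras.applications.inception_resnet_v2 import InceptionResNetV2, preprocess_input

model = InceptionResNetV2(include_top=False, input_shape=(200, 250, 3))

batch = np.expand_dims(img.astype("float32"), axis=0)  # (200, 250, 3) -> (1, 200, 250, 3)
batch = preprocess_input(batch)                        # scale pixels the way the network expects
features = model.predict(batch)                        # one feature map per image in the batch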

1 vote

You need to feed a batch of images. Even if the batch contains only a single image, it still has to have the batch format, with a leading batch dimension.

Try img.reshape((1, 200, 250, 3)).
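
As a minimal sketch, assuming img and model are the array and network from the question:

import numpy as np

batch = img.reshape((1, 200, 250, 3))  # add the leading batch dimension
# equivalently: batch = img[np.newaxis, ...]
predictions = model.predict(batch)     # predictions.shape[0] == 1, one result per image

To predict on several images at once, stack them along the first axis, e.g. np.stack([img1, img2]) for two hypothetical images of the same size.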