I'm trying to solve a classification problem, and I don't know why I'm getting this error:

ValueError: Input 0 of layer sequential_9 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [None, None, None]

This is the main code:

from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, BatchNormalization, Flatten, Dropout, Dense

model = createModel()
filesPath = getFilesPathWithoutSeizure(i, indexPat)
history = model.fit_generator(generate_arrays_for_training(indexPat, filesPath, end=75))  # problem here
def createModel():
    input_shape = (1, 11, 3840)
    model = Sequential()
    # C1
    model.add(Conv2D(16, (5, 5), strides=(2, 2), padding='same', activation='relu',
                     data_format="channels_first", input_shape=input_shape))
    model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), data_format="channels_first", padding='same'))
    model.add(BatchNormalization())
    # C2
    model.add(Conv2D(32, (3, 3), strides=(1, 1), padding='same',
                     data_format="channels_first", activation='relu'))  # unsure whether to remove padding
    model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), data_format="channels_first", padding='same'))
    model.add(BatchNormalization())
    # C3
    model.add(Conv2D(64, (3, 3), strides=(1, 1), padding='same',
                     data_format="channels_first", activation='relu'))  # unsure whether to remove padding
    model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), data_format="channels_first", padding='same'))
    model.add(BatchNormalization())
    model.add(Flatten())
    model.add(Dropout(0.5))
    model.add(Dense(256, activation='sigmoid'))
    model.add(Dropout(0.5))
    model.add(Dense(2, activation='softmax'))
    opt_adam = keras.optimizers.Adam(lr=0.00001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
    model.compile(loss='categorical_crossentropy', optimizer=opt_adam, metrics=['accuracy'])

    return model

Error:

    history=model.fit_generator(generate_arrays_for_training(indexPat, filesPath, end=75), #end=75),#It take the first 75%
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/util/deprecation.py", line 324, in new_func
    return func(*args, **kwargs)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1815, in fit_generator
    return self.fit(
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 108, in _method_wrapper
    return method(self, *args, **kwargs)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1098, in fit
    tmp_logs = train_function(iterator)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 780, in __call__
    result = self._call(*args, **kwds)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 823, in _call
    self._initialize(args, kwds, add_initializers_to=initializers)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 696, in _initialize
    self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 2855, in _get_concrete_function_internal_garbage_collected
    graph_function, _, _ = self._maybe_define_function(args, kwargs)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3213, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3065, in _create_graph_function
    func_graph_module.func_graph_from_py_func(
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 986, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 600, in wrapped_fn
    return weak_wrapped_fn().__wrapped__(*args, **kwds)
  File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 973, in wrapper
    raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:

    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:806 train_function  *
        return step_function(self, iterator)
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:796 step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:1211 run
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:2585 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:2945 _call_for_each_replica
        return fn(*args, **kwargs)
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:789 run_step  **
        outputs = model.train_step(data)
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:747 train_step
        y_pred = self(x, training=True)
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py:975 __call__
        input_spec.assert_input_compatibility(self.input_spec, inputs,
    /home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/input_spec.py:191 assert_input_compatibility
        raise ValueError('Input ' + str(input_index) + ' of layer ' +

    ValueError: Input 0 of layer sequential_9 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [None, None, None]

What line of code triggered that exception? Is this line somewhere in your provided code? – Arty
Can you post the full exception? The beginning (the first lines) is missing, so we can't see which line of code triggered it. – Arty
What is the shape of the numpy arrays generated by the function generate_arrays_for_training()? – Arty
@Arty Yes, this is the line: history=model.fit_generator(generate_arrays_for_training(indexPat, filesPath, end=75)) ##problem here. The shape of the numpy array generated by generate_arrays_for_training() is (1, 11, 3840). – Edayildiz
@Edayildiz I understood the reason; I'll write an answer now. – Arty

1 Answer


Keras always hides the 0-th dimension, also known as the batch dimension. Wherever you write input_shape = (A, B, C), the batch dimension must not be mentioned: (A, B, C) should be the shape of one object (one image, in your case). For example, if you set input_shape = (1, 11, 3840), it actually means that the data for training or prediction should be a numpy array of shape like (7, 1, 11, 3840), i.e. a batch of 7 objects. That 7 is the batch size, the number of objects processed in parallel.

So if one of your objects (e.g. an image) has shape (11, 3840), then you have to write input_shape = (11, 3840) everywhere, without mentioning the batch size.

Why does Keras hide the 0-th (batch) dimension? Because it expects the batch size to vary: today you may provide 7 objects, tomorrow 9, and the same network will work for both. But the shape of one object, (11, 3840), should never change, and the training data produced by generate_arrays_for_training() should always have shape (BatchSize, 11, 3840), where BatchSize can vary; you may generate a batch of 1, 7, or 9 objects-images, each of shape (11, 3840).
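To make this concrete, here is a minimal runnable sketch (the tiny model and the random arrays are illustrative, not taken from the question): the same network accepts batches of different sizes, as long as each object matches the declared input_shape.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# input_shape describes ONE object; the batch dimension is implicit
model = keras.Sequential([
    layers.Flatten(input_shape=(11, 3840)),
    layers.Dense(2, activation='softmax'),
])

batch_of_7 = np.random.rand(7, 11, 3840)  # 7 objects
batch_of_9 = np.random.rand(9, 11, 3840)  # 9 objects
print(model.predict(batch_of_7).shape)    # (7, 2)
print(model.predict(batch_of_9).shape)    # (9, 2)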

If your inputs to the first layer need to be 3-dimensional, with 1 channel, then you have to expand the dims of your generated training data with X = np.expand_dims(X, 0), so that your training X data has shape (1, 1, 11, 3840), i.e. a batch with 1 object; only then can you keep input_shape = (1, 11, 3840).
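For example (a numpy-only sketch, where X stands in for one object produced by your generator):

import numpy as np

X = np.zeros((1, 11, 3840))   # one generated object: (channels, height, width)
X = np.expand_dims(X, 0)      # prepend the batch dimension
print(X.shape)                # (1, 1, 11, 3840): a batch of one object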

I also see that you write data_format="channels_first" everywhere; by default all these functions are channels_last. To avoid writing it everywhere, you can reshape the data generated by generate_arrays_for_training() just once: if X has shape (1, 1, 11, 3840), then X = X.transpose(0, 2, 3, 1) moves the channels to the last dimension.

Transposing moves one dimension to another place. But in your case, since you have just 1 channel, instead of transposing you can simply reshape: X of shape (1, 1, 11, 3840) can be reshaped with X = X.reshape(1, 11, 3840, 1), giving shape (1, 11, 3840, 1). This is only needed if you want to avoid writing "channels_first" everywhere; if you don't mind it, transposing/reshaping is not needed at all!
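A quick sketch to verify that, for a single channel, transposing and reshaping produce the same array:

import numpy as np

X = np.arange(1 * 1 * 11 * 3840).reshape(1, 1, 11, 3840)
X_t = X.transpose(0, 2, 3, 1)    # channels_first -> channels_last
X_r = X.reshape(1, 11, 3840, 1)  # equivalent here because channels == 1
print(X_t.shape, X_r.shape)      # (1, 11, 3840, 1) (1, 11, 3840, 1)
print(np.array_equal(X_t, X_r))  # True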

As I remember from past experience, Keras sometimes dislikes dimensions of size 1 and tries to squeeze them out in several different functions, i.e. if it sees an array of shape (1, 2, 1, 3, 1, 4) it often tries to reshape it to (2, 3, 4). In that case np.expand_dims() effectively gets ignored, and probably the only workaround is to generate batches of at least 2 images.

You may also read my longer answer; although it is a bit unrelated, it may help you understand how training/prediction in Keras works. In particular, see the last paragraphs, numbered 1-12.

Update: As it turned out, the problem was solved with the following modifications (a combined sketch appears after the list):

  1. In the data-generating function, dims had to be expanded twice, i.e. X = np.expand_dims(np.expand_dims(X, 0), 0).

  2. In the data-generating function, a transpose was also needed: X = X.transpose(0, 2, 3, 1).

  3. In the network code, the input shape was set to input_shape = (11, 3840, 1).

  4. In the network code, all occurrences of data_format = "channels_first" were removed.
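Putting the four modifications together, here is a hedged sketch of the fixed pipeline. generate_arrays_for_training() is not shown in the question, so its body below (the random data and the constant one-hot label) is purely illustrative, and the model is shortened to the first block:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, BatchNormalization, Flatten, Dense

def generate_arrays_for_training(indexPat, filesPath, end=75):
    while True:
        X = np.random.rand(11, 3840)                 # stand-in for one loaded object
        X = np.expand_dims(np.expand_dims(X, 0), 0)  # (1, 1, 11, 3840): modification 1
        X = X.transpose(0, 2, 3, 1)                  # (1, 11, 3840, 1): modification 2
        y = np.array([[1.0, 0.0]])                   # one-hot label for a batch of 1
        yield X, y

model = Sequential([
    Conv2D(16, (5, 5), strides=(2, 2), padding='same', activation='relu',
           input_shape=(11, 3840, 1)),               # modifications 3 and 4: channels_last
    MaxPooling2D(pool_size=(2, 2), padding='same'),
    BatchNormalization(),
    Flatten(),
    Dense(2, activation='softmax'),
])
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(generate_arrays_for_training(0, [], end=75), steps_per_epoch=10, epochs=1)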