If you set padding = "valid"
(default behavior), meaning that the automatic dimensionality reduction occurs during convolution/maxpooling and you will get negative dimensions. To make sure you get the same dimensionality after performing convolution/maxpooling as you need to set padding=same
while specifying Conv3D and MaxPooling3D layers.
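The per-axis output-size rules are floor((n - k)/s) + 1 for 'valid' and ceil(n/s) for 'same'. The short sketch below (plain Python, helper names are purely illustrative, not Keras API) shows why 'valid' fails on the height axis of this input, and why 'same' matches the shapes in the summary further down:

import math

def valid_out(n, k, s):
    # 'valid': no padding, the window must fit entirely inside the input
    return (n - k) // s + 1

def same_out(n, s):
    # 'same': enough padding is added so the output depends only on the stride
    return math.ceil(n / s)

# Height axis: input height is 5, first Conv3D uses kernel 5, stride 2
h = valid_out(5, 5, 2)       # -> 1
# The next MaxPooling3D (pool 2, stride 2) no longer fits under 'valid':
print(valid_out(h, 2, 2))    # -> 0; Keras rejects this with a negative-dimension error
# Under 'same' the same two layers keep a usable size:
print(same_out(5, 2))        # -> 3, matching the conv3d row in the summary below
print(same_out(3, 2))        # -> 2, matching the max_pooling3d row

With padding='same' throughout, the model builds cleanly: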
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten
from tensorflow.keras.layers import Conv3D, MaxPooling3D, BatchNormalization
import numpy as np
input_shape = (1, 22, 5, 3844)  # (channels, depth, height, width) for data_format="channels_first"
model = Sequential()
model.add(Conv3D(16, (22, 5, 5), strides=(1, 2, 2), padding='same', activation='relu', data_format="channels_first", input_shape=input_shape))
model.add(MaxPooling3D(pool_size=(1, 2, 2), data_format="channels_first", padding='same'))
model.add(BatchNormalization())
model.add(Conv3D(32, (1, 3, 3), strides=(1, 1, 1), padding='same', data_format="channels_first", activation='relu'))  # unsure whether to drop padding
model.add(MaxPooling3D(pool_size=(1, 2, 2), data_format="channels_first", padding='same'))
model.add(BatchNormalization())
model.add(Conv3D(64, (1, 3, 3), strides=(1, 1, 1), padding='same', data_format="channels_first", activation='relu'))  # unsure whether to drop padding
model.add(MaxPooling3D(pool_size=(1, 2, 2), data_format="channels_first", padding='same'))
model.add(BatchNormalization())
model.add(Flatten())
model.add(Dropout(0.5))
model.add(Dense(256, activation='sigmoid'))
model.add(Dropout(0.5))
model.add(Dense(2, activation='softmax'))
opt_adam = tf.keras.optimizers.Adam(learning_rate=0.00001, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
model.compile(loss='categorical_crossentropy', optimizer=opt_adam, metrics=['accuracy'])
print(model.summary())
Output:
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv3d (Conv3D) (None, 16, 22, 3, 1922) 8816
_________________________________________________________________
max_pooling3d (MaxPooling3D) (None, 16, 22, 2, 961) 0
_________________________________________________________________
batch_normalization (BatchNo (None, 16, 22, 2, 961) 3844
_________________________________________________________________
conv3d_1 (Conv3D) (None, 32, 22, 2, 961) 4640
_________________________________________________________________
max_pooling3d_1 (MaxPooling3 (None, 32, 22, 1, 481) 0
_________________________________________________________________
batch_normalization_1 (Batch (None, 32, 22, 1, 481) 1924
_________________________________________________________________
conv3d_2 (Conv3D) (None, 64, 22, 1, 481) 18496
_________________________________________________________________
max_pooling3d_2 (MaxPooling3 (None, 64, 22, 1, 241) 0
_________________________________________________________________
batch_normalization_2 (Batch (None, 64, 22, 1, 241) 964
_________________________________________________________________
flatten (Flatten) (None, 339328) 0
_________________________________________________________________
dropout (Dropout) (None, 339328) 0
_________________________________________________________________
dense (Dense) (None, 256) 86868224
_________________________________________________________________
dropout_1 (Dropout) (None, 256) 0
_________________________________________________________________
dense_1 (Dense) (None, 2) 514
=================================================================
Total params: 86,907,422
Trainable params: 86,904,056
Non-trainable params: 3,366
_________________________________________________________________
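The model then expects inputs of shape (batch, 1, 22, 5, 3844) (channels first) and one-hot labels of length 2. A minimal fit sketch with random placeholder data, just to illustrate the expected shapes (batch size and epoch count here are arbitrary):

x = np.random.rand(8, 1, 22, 5, 3844).astype('float32')                # (batch, channels, depth, height, width)
y = tf.keras.utils.to_categorical(np.random.randint(0, 2, size=8), 2)  # one-hot labels for 2 classes
model.fit(x, y, batch_size=4, epochs=1)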