Consider the following CNN architecture (the code fragment is adapted from this link):
```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

# input_shape and num_classes are defined earlier in the linked example
model = Sequential()
# Two convolutional layers extract local features from the input image
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
# Pooling downsamples the feature maps; Dropout regularizes them
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
# Flatten turns the 3-D feature maps into a 1-D vector for the Dense layers
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
```
My questions are about the training process of a CNN:

- When you train the model, do the outputs of the Flatten layer change across epochs?
- If those Flatten outputs change, does that mean backpropagation also runs through the layers before Flatten (Conv2D -> Conv2D -> MaxPooling2D -> Flatten)? (See the first sketch below.)
- Why is a Dropout layer needed after the MaxPooling2D layer (or after any layer before Flatten)? (See the second sketch below.)
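For the first two questions, one way I can think of to check this empirically is to build a sub-model that stops at the Flatten layer and compare its outputs on a fixed batch before and after training. This is only a rough sketch; `x_train`, `y_train`, and the compile settings are placeholders of mine, not from the linked example:

```python
import numpy as np
from tensorflow.keras.models import Model

# Sub-model that returns the Flatten activations for any input batch
flatten_layer = next(l for l in model.layers if isinstance(l, Flatten))
probe = Model(inputs=model.input, outputs=flatten_layer.output)

# Placeholder compile settings and training data (not in the original snippet)
model.compile(optimizer='adam', loss='categorical_crossentropy')
x_fixed = x_train[:8]                    # any fixed batch of inputs

before = probe.predict(x_fixed)          # Flatten outputs before training
model.fit(x_train, y_train, epochs=1, batch_size=128)
after = probe.predict(x_fixed)           # Flatten outputs after one epoch

print(np.abs(after - before).max())      # > 0 would mean the conv weights moved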
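For the third question, this standalone snippet (my own, not from the example) shows what Dropout does to a pooled feature map: at training time it zeroes a random fraction of the activations and rescales the survivors, while at inference time it is the identity:

```python
import numpy as np
import tensorflow as tf

# Stand-in for a batch of pooled feature maps (all ones for readability)
feature_maps = tf.ones((1, 4, 4, 2))
drop = tf.keras.layers.Dropout(0.25)

train_out = drop(feature_maps, training=True)   # ~25% of entries zeroed,
                                                # survivors scaled by 1/0.75
infer_out = drop(feature_maps, training=False)  # identity at inference

print(np.count_nonzero(np.asarray(train_out) == 0))  # roughly 25% of 32 entries
print(bool(np.all(np.asarray(infer_out) == 1.0)))    # True
```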