I am following along with DataCamp's tutorial on using convolutional autoencoders for classification here. I understand from the tutorial that for classification we only need the encoder part of the autoencoder, with a fully-connected layer stacked on top.
After stacking, the resulting network (encoder + classifier) is trained twice. The first time, the encoder layers are frozen by setting them to non-trainable:
for layer in full_model.layers[0:19]:
    layer.trainable = False
Then those layers are set back to trainable and the network is re-trained:
for layer in full_model.layers[0:19]:
    layer.trainable = True
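For reference, my mental model of the whole procedure is roughly the sketch below. The architecture, optimizer, loss, and the dummy data are placeholders of mine, not the tutorial's exact code (the tutorial freezes layers[0:19]; here the encoder is much smaller):

import numpy as np
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.models import Model

# Placeholder data standing in for the tutorial's dataset
train_X = np.random.rand(100, 28, 28, 1).astype('float32')
train_Y = np.random.randint(0, 10, size=(100,))

# Encoder part (its weights would normally be copied from the trained autoencoder)
inp = Input(shape=(28, 28, 1))
x = Conv2D(32, (3, 3), activation='relu', padding='same')(inp)
x = MaxPooling2D((2, 2))(x)
x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2))(x)

# Fully-connected classification head stacked on top of the encoder
x = Flatten()(x)
x = Dense(128, activation='relu')(x)
out = Dense(10, activation='softmax')(x)
full_model = Model(inp, out)

n_encoder_layers = 5  # stands in for the tutorial's layers[0:19] slice

# Stage 1: freeze the encoder, train only the classification head
for layer in full_model.layers[:n_encoder_layers]:
    layer.trainable = False
full_model.compile(loss='sparse_categorical_crossentropy', optimizer='adam',
                   metrics=['accuracy'])
full_model.fit(train_X, train_Y, epochs=1, batch_size=32)

# Stage 2: unfreeze the encoder, re-compile, and fine-tune the whole network
for layer in full_model.layers[:n_encoder_layers]:
    layer.trainable = True
full_model.compile(loss='sparse_categorical_crossentropy', optimizer='adam',
                   metrics=['accuracy'])
full_model.fit(train_X, train_Y, epochs=1, batch_size=32)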
I cannot understand why the network is trained twice like this. Can anyone with experience working with conv-nets or autoencoders explain?