I am asking this question because the VGG19 model I am using contains batch-normalization layers (unlike VGG16, for example).
I am trying to train a Faster-RCNN network in Caffe. I am doing it by:
- Downloading a VGG19 ImageNet pretrained model (weights + prototxt file)
- Removing the fully-connected layers from the prototxt file
- Adding the RPN and the Fast-RCNN layers on top of the VGG19 backbone convolutional layers
I did not change anything regarding the lr_mult values of the convolutional layers. In the prototxt file, the convolutional layers (conv1_1, etc.) have non-zero lr_mult values, while the batch-normalization layers (named like conv1_1/bn) have lr_mult set to 0.
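For concreteness, the relevant part of such a prototxt typically looks like the sketch below (the exact num_output, pad, and decay_mult values, and whether a Scale layer follows the BatchNorm layer, depend on the specific model file):

```
layer {
  name: "conv1_1"
  type: "Convolution"
  bottom: "data"
  top: "conv1_1"
  param { lr_mult: 1 decay_mult: 1 }  # weights: non-zero, so they are updated
  param { lr_mult: 2 decay_mult: 0 }  # bias: non-zero, so it is updated
  convolution_param { num_output: 64 kernel_size: 3 pad: 1 }
}
layer {
  name: "conv1_1/bn"
  type: "BatchNorm"
  bottom: "conv1_1"
  top: "conv1_1"
  param { lr_mult: 0 }  # running mean: frozen
  param { lr_mult: 0 }  # running variance: frozen
  param { lr_mult: 0 }  # moving-average factor: frozen
  batch_norm_param { use_global_stats: true }
}
```

Note that lr_mult is a per-blob multiplier on the solver's base learning rate, applied independently within each layer, so zeroing it in the BatchNorm layer says nothing about the Convolution layer above it.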
Does the fact that the batch-normalization layers are frozen mean that the convolutional layers are frozen as well? Or should I also set lr_mult to 0 in the layers named convX_X?
Update: After running another training process with lr_mult zeroed for all the convolutional layers, the training time dropped dramatically. This implies that those layers were still being updated before, so freezing the batch-normalization layers alone does not freeze them: the answer is that lr_mult needs to be set to 0 in the convX_X layers as well.
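For reference, freezing a convolutional layer amounts to zeroing the lr_mult (and usually decay_mult) of each of its param blocks, e.g. (again a sketch following the layer definition above):

```
layer {
  name: "conv1_1"
  type: "Convolution"
  bottom: "data"
  top: "conv1_1"
  param { lr_mult: 0 decay_mult: 0 }  # weights: frozen
  param { lr_mult: 0 decay_mult: 0 }  # bias: frozen
  convolution_param { num_output: 64 kernel_size: 3 pad: 1 }
}
```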