
I have two trained models (model_A and model_B), both of which contain dropout layers. I have frozen model_A and model_B and merged them with a new dense layer to get model_AB (without removing model_A's and model_B's dropout layers). All of model_AB's weights are non-trainable, except for the added dense layer.
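For concreteness, here is a minimal sketch of the setup I mean (the stand-in models and layer sizes are made up just for illustration):

from keras import layers
from keras import models

def make_small_model():
    # stand-in for one of my trained models; note the Dropout layer
    inp = layers.Input(shape=(10,))
    x = layers.Dense(8, activation='relu')(inp)
    x = layers.Dropout(0.5)(x)
    return models.Model(inp, x)

model_A = make_small_model()
model_B = make_small_model()
model_A.trainable = False  # freeze both trained models
model_B.trainable = False

inp = layers.Input(shape=(10,))
merged = layers.concatenate([model_A(inp), model_B(inp)])
out = layers.Dense(1)(merged)  # the new dense layer is the only trainable part
model_AB = models.Model(inp, out)
model_AB.compile(optimizer='adam', loss='mse')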

Now my question is: are the dropout layers in model_A and model_B active (i.e. dropping neurons) while I am training model_AB?


1 Answer


Short answer: The dropout layers will continue dropping neurons during training, even if you set their trainable property to False.

Long answer: There are two distinct notions in Keras:

  • Updating the weights and states of a layer: this is controlled using the trainable property of that layer, i.e. if you set layer.trainable = False then the weights and internal states of the layer will not be updated.

  • Behavior of a layer in the training and testing phases: as you know, a layer such as dropout may behave differently during training and testing. The learning phase in Keras is set using keras.backend.set_learning_phase(). For example, when you call model.fit(...) the learning phase is automatically set to 1 (i.e. training), whereas when you use model.predict(...) it is automatically set to 0 (i.e. test). Further, note that a learning phase of 1 (i.e. training) does not necessarily imply updating the weights/states of a layer: you can run your model in the training phase and no weights will be updated; the layers will just switch to their training behavior (see this answer for more information). Finally, there is another way to set the learning phase for each individual layer: passing the training=True argument when calling the layer on a tensor (see this answer for more information); a minimal sketch of this follows the list.
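For illustration, here is a sketch of that per-call approach (the model here is made up); passing training=True fixes the dropout layer in its training behavior, so it keeps dropping neurons even during model.predict(...):

from keras import layers
from keras import models

inp = layers.Input(shape=(10,))
# training=True forces this layer's training behavior regardless of the
# global learning phase, so neurons are dropped even at prediction time
# (this is the usual way to set up Monte Carlo dropout)
out = layers.Dropout(0.5)(inp, training=True)
mc_model = models.Model(inp, out)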

So according to the above points, when you set trainable=False on a dropout layer and use it in training mode (e.g. by calling model.fit(...), or by manually setting the learning phase to training as in the example below), the neurons will still be dropped by the dropout layer.

Here is a reproducible example which illustrates this point:

from keras import layers
from keras import models
from keras import backend as K
import numpy as np

inp = layers.Input(shape=(10,))
out = layers.Dropout(0.5)(inp)

model = models.Model(inp, out)
model.layers[-1].trainable = False  # set dropout layer as non-trainable
model.compile(optimizer='adam', loss='mse') # IMPORTANT: we must always compile model after changing `trainable` attribute

# create a custom backend function so that we can control the learning phase
func = K.function(model.inputs + [K.learning_phase()], model.outputs)

x = np.ones((1, 10))

# learning phase = 1, i.e. training mode
print(func([x, 1]))
# the output will be something like:
# [array([[2., 2., 2., 0., 0., 2., 2., 2., 0., 0.]], dtype=float32)]
# as you can see, some of the neurons have been dropped; the surviving ones
# are scaled by 1 / (1 - rate) = 2, since Dropout uses inverted dropout

# now set learning phase = 0, i.e. test mode
print(func([x, 0]))
# the output will be:
# [array([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]], dtype=float32)]
# unsurprisingly, no neurons have been dropped in test mode
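As a sanity check (assuming the same model and x as above), the high-level API should agree with the manual learning-phase calls: model.predict(...) runs with a learning phase of 0, so it matches the func([x, 0]) result:

print(model.predict(x))
# expected, same as the learning phase = 0 case above:
# [[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]]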