I have a model in Keras as follows:

from keras.layers import Input, TimeDistributed, ZeroPadding2D, Conv2D, MaxPooling2D

# input_shape includes the time dimension, e.g. (time_steps, 512, 640, 3)
data = Input(shape=input_shape)

# 512 x 640 x 3 per time step
pad1 = TimeDistributed(ZeroPadding2D(padding=(100, 100)))(data)

# 712 x 840 x 3 after padding
conv1_1 = TimeDistributed(Conv2D(8, (3,3), padding="valid", activation="relu", name="block1_conv1", data_format="channels_last"))(pad1)
conv1_2 = TimeDistributed(Conv2D(8, (3,3), padding="same", activation="relu", name="block1_conv2", data_format="channels_last"))(conv1_1)
pool1   = TimeDistributed(MaxPooling2D((2,2), strides=(2,2), padding="same", name="block1_pool", data_format="channels_last"))(conv1_2)

I want to set the trainable weights of conv1_1 and conv1_2 to pre-trained values that are shared across every time step. Can I do this? Keras seems to treat these layers as independent entities, each with its own trainable parameters, rather than as a collection of Conv2D applications sharing the same weights. Is there a way to change this? How can I access the trainable weights for a single time slice and distribute them to all time slices?


1 Answer


Instantiate the Conv2D layers once, keep references to them, and pass those instances to TimeDistributed. TimeDistributed applies the same layer instance, with the same weights, at every time step, so the weights are shared across all time slices automatically:

from keras.layers import Input, TimeDistributed, ZeroPadding2D, Conv2D, MaxPooling2D

data = Input(shape=input_shape)

# 512 x 640 x 3
pad1 = TimeDistributed(ZeroPadding2D(padding=(100, 100)))(data)

# 712 x 840 x 3
# Create the layers once so you hold a handle to the shared weights.
nd_conv1_1 = Conv2D(8, (3,3), padding="valid", activation="relu", name="block1_conv1", data_format="channels_last")
nd_conv1_2 = Conv2D(8, (3,3), padding="same", activation="relu", name="block1_conv2", data_format="channels_last")
conv1_1 = TimeDistributed(nd_conv1_1)(pad1)
conv1_2 = TimeDistributed(nd_conv1_2)(conv1_1)
pool1   = TimeDistributed(MaxPooling2D((2,2), strides=(2,2), padding="same", name="block1_pool", data_format="channels_last"))(conv1_2)

# Freeze (or unfreeze) the shared layers before compiling the model:
nd_conv1_1.trainable = False
nd_conv1_2.trainable = False
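
To initialize those shared layers with pre-trained values, you can call set_weights on the same references. A minimal sketch, assuming you already have the kernel and bias as NumPy arrays; the .npy filenames here are hypothetical placeholders, and the shapes must match the layer, e.g. (3, 3, 3, 8) for block1_conv1's kernel and (8,) for its bias:

import numpy as np

# Hypothetical files holding the pre-trained parameters:
pretrained_kernel = np.load("block1_conv1_kernel.npy")  # shape (3, 3, 3, 8)
pretrained_bias   = np.load("block1_conv1_bias.npy")    # shape (8,)

# One call sets the weights used at every time step, because
# TimeDistributed reuses this single layer instance:
nd_conv1_1.set_weights([pretrained_kernel, pretrained_bias])

Likewise, nd_conv1_1.get_weights() returns the one shared kernel/bias pair, so there is no per-time-slice copy to access or distribute; every slice already uses the same parameters.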