
I am working with ResNet to train on my data. I have frozen most of the layers and am only training the last 4 layers. I want to change the dimensions of these last four layers so that they match my input dimensions and channels. As I am new to this, I don't know how to do it. I tried googling it but cannot find a solution.

base_model = tf.keras.applications.ResNet50(
    include_top=False,
    weights="imagenet",
    input_tensor=3,
    input_shape=(150,150),
    pooling=None,  
)
for layer in base_model.layers[:46]:
    layer.trainable = False
What do you mean by "the last layer dimension should match your input dimension"? - M.Innat
What I meant is that the dimensions of my input images are different from the dimensions of the images ResNet was trained on. And I am working only with the last few layers, so the dimensions of these layers are also different, or am I wrong? - user123
You don't need to worry about that; just load the pretrained weights, unfreeze the last four layers, and train / fine-tune the model with your custom input. - M.Innat
Neural networks can only work with the same input size as they were trained on, so you have to preprocess the images before feeding them as input into the neural network for prediction. You can use tf.keras.applications.resnet.preprocess_input here for preprocessing the images and then use those as input to your network. - Kishore
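The preprocessing Kishore mentions can be sketched as follows (a minimal example on a dummy batch; the 150x150 size is taken from the question):

```python
import numpy as np
import tensorflow as tf

# Dummy batch of two 150x150 RGB images with pixel values in [0, 255].
images = np.random.uniform(0, 255, size=(2, 150, 150, 3)).astype("float32")

# ResNet preprocessing: converts RGB to BGR and zero-centers each channel
# with respect to the ImageNet dataset, without scaling to [0, 1].
preprocessed = tf.keras.applications.resnet.preprocess_input(images)

print(preprocessed.shape)  # (2, 150, 150, 3)
```

Note that this handles pixel-value normalization, not resizing; resize your images to the model's input shape (e.g. with tf.image.resize) before this step.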

1 Answer


If you want to change the architecture of the last layers, you should take the output of the desired intermediate layer and connect it to your own layers.

I assume that you want to change the architecture after the 46th layer.

First, define the pre-trained model:

import tensorflow as tf

base_model = tf.keras.applications.ResNet50(
    include_top=False,
    weights="imagenet",
    input_shape=(150,150,3), 
)
for layer in base_model.layers[:46]:
    layer.trainable = False

Then, get the name of the intermediate layer you want (in this case, the 46th layer):

print(base_model.layers[46].name)

For me, the output is conv3_block1_3_conv.
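If you are unsure which index to cut at, a quick inspection snippet (not part of the original answer; weights=None is used here only to skip the weight download) prints index/name pairs around the cut point:

```python
import tensorflow as tf

base_model = tf.keras.applications.ResNet50(
    include_top=False, weights=None, input_shape=(150, 150, 3))

# Print index and name for the layers around the 46th one.
for i, layer in enumerate(base_model.layers[44:49], start=44):
    print(i, layer.name)
```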

Then get the output of this layer and connect it to your own layers:

last_layer = base_model.get_layer('conv3_block1_3_conv') #get the layer
last_output = last_layer.output                          #get the layer output
x = tf.keras.layers.Flatten()(last_output)               #flatten the output
x = tf.keras.layers.Dense(1024, activation='relu')(x)    #add your own layers
x = tf.keras.layers.Dense(1, activation='sigmoid')(x)    #add your own output
model = tf.keras.Model(base_model.input, x)              #create the new model
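Putting it all together, a self-contained sketch of this approach (weights=None here only so the example does not download the ImageNet weights; keep weights="imagenet" for real fine-tuning):

```python
import numpy as np
import tensorflow as tf

# Rebuild the model as in the answer, but without downloading weights.
base_model = tf.keras.applications.ResNet50(
    include_top=False, weights=None, input_shape=(150, 150, 3))
for layer in base_model.layers[:46]:
    layer.trainable = False

last_output = base_model.get_layer('conv3_block1_3_conv').output
x = tf.keras.layers.Flatten()(last_output)
x = tf.keras.layers.Dense(1024, activation='relu')(x)
x = tf.keras.layers.Dense(1, activation='sigmoid')(x)
model = tf.keras.Model(base_model.input, x)

# Binary classification setup matching the sigmoid output.
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])

# Sanity-check with a preprocessed dummy batch before real training.
dummy = tf.keras.applications.resnet.preprocess_input(
    np.random.uniform(0, 255, (2, 150, 150, 3)).astype('float32'))
print(model.predict(dummy).shape)  # (2, 1)
```

After this sanity check passes, you can call model.fit on your own 150x150 RGB images (preprocessed the same way) and labels.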