I have trained a model with frozen feature-extraction layers, which was initialised as follows:

model = models.densenet161(pretrained=True)
for param in model.parameters():
    param.requires_grad = False  # freeze the feature-extraction layers
num_ftrs = model.classifier.in_features
model.classifier = torch.nn.Linear(num_ftrs, 2)  # replace the classifier head (2 classes)

However, at inference, I am unsure of how to load the model correctly. In a separate script I do the following:

model = models.densenet161(pretrained=True)
for param in model.parameters():
    param.requires_grad = False
num_ftrs = model.classifier.in_features
model.classifier = torch.nn.Linear(num_ftrs,2)
model.to(device)

# load the best model
bestmodel = get_best_model(best)
bestmodel = torch.load(bestmodel)
model.load_state_dict(bestmodel['classifier'])

# set model to evaluation mode
model.eval()
with torch.no_grad():
    outputs = model(inputs)  # `inputs`: a preprocessed image batch (placeholder)
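
To check my understanding of `state_dict` loading in isolation, here is a minimal round trip using a plain `torch.nn.Linear` head in place of the DenseNet classifier (2208 is the `in_features` of densenet161's classifier; the file name and the `'classifier'` checkpoint key mirror what my loading code expects, but are otherwise illustrative):

```python
import torch

# stand-in for the replaced classifier head (2208 -> 2, like densenet161)
head = torch.nn.Linear(2208, 2)

# save a checkpoint dict with the head's weights under a 'classifier' key
torch.save({'classifier': head.state_dict()}, 'best_model.pth')

# load it back into a freshly initialised head
checkpoint = torch.load('best_model.pth')
new_head = torch.nn.Linear(2208, 2)
new_head.load_state_dict(checkpoint['classifier'])

# the restored head now produces identical outputs to the original
x = torch.randn(1, 2208)
assert torch.equal(head(x), new_head(x))
```

This round trip works, so my uncertainty is only about whether the full model in the inference script needs the pretrained backbone weights before loading the checkpoint.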

Does this look correct? Or should I set pretrained=False when instantiating the model in my inference script?