Does it call forward() in nn.Module? I thought that when we call the model, the forward method is used.
Why do we need to specify train()?
model.train() tells your model that you are training it. This way, layers like dropout and batch norm, which behave differently during training and evaluation, know what is going on and can behave accordingly.
More details:
It sets the mode to train (see the source code). You can call either model.eval() or model.train(mode=False) to indicate that you are testing. It is somewhat intuitive to expect the train function to train the model, but it does not do that; it only sets the mode.
Here is the code of module.train():
```python
def train(self, mode=True):
    r"""Sets the module in training mode."""
    self.training = mode
    for module in self.children():
        module.train(mode)
    return self
```
And here is the code of module.eval():
```python
def eval(self):
    r"""Sets the module in evaluation mode."""
    return self.train(False)
```
train and eval are the only two modes a module can be set to, and they are exact opposites.
That's just a self.training flag, and currently only Dropout and BatchNorm care about it.
By default, this flag is set to True.
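A minimal sketch (using plain torch.nn modules, nothing model-specific) showing that these calls only flip the flag, and that it propagates recursively to all submodules:

```python
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(8, 8),
    nn.Dropout(p=0.5),
    nn.BatchNorm1d(8),
)

print(model.training)   # True: modules start in training mode

model.eval()            # same as model.train(mode=False)
print(model.training)   # False
print([m.training for m in model.children()])   # [False, False, False]: flag propagates to children

model.train()
print(model.training)   # True again
```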
There are two ways of letting the model know your intention, i.e. whether you want to train the model or use it for evaluation. With model.train() the model knows it is being trained, and model.eval() indicates that nothing new is to be learnt and the model is being used for testing.
model.eval() is also necessary in practice: if the model contains batch norm and, at test time, you want to pass just a single image, PyTorch throws an error unless model.eval() is specified.
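For example (a small sketch with BatchNorm1d): in training mode the per-batch variance is undefined for a single sample, so PyTorch raises an error, while in eval mode the stored running statistics are used and the same input passes:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(4)
single = torch.randn(1, 4)   # a "batch" of one sample

bn.eval()
bn(single)                   # fine: normalizes with running statistics

bn.train()
try:
    bn(single)               # per-batch statistics cannot be computed from one value per channel
except ValueError as err:
    print(err)               # "Expected more than 1 value per channel when training ..."
```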
| `model.train()` | `model.eval()` |
|---|---|
| Sets your model in training mode, i.e.:<br>• BatchNorm layers use per-batch statistics<br>• Dropout layers are activated<br>etc. | Sets your model in evaluation (inference) mode, i.e.:<br>• BatchNorm layers use running statistics<br>• Dropout layers are de-activated<br>etc.<br>Equivalent to `model.train(False)`. |
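The BatchNorm row of the table can be seen directly (a short sketch): in training mode each forward pass updates the running statistics, while in eval mode they are used as-is and left untouched:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(3)
x = torch.randn(16, 3) + 5.0   # batch whose mean is far from zero

bn.train()
print(bn.running_mean)         # starts at zeros
bn(x)                          # training mode: normalizes with batch stats, updates running stats
print(bn.running_mean)         # has moved toward the batch mean

bn.eval()
frozen = bn.running_mean.clone()
bn(x)                          # eval mode: normalizes with running stats, does not update them
print(torch.equal(bn.running_mean, frozen))   # True
```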
Note: neither of these function calls runs a forward or backward pass. They tell the model how to act when it is run.
This is important because some modules (layers), e.g. Dropout and BatchNorm, are designed to behave differently during training vs. inference, and the model will produce unexpected results if run in the wrong mode.
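Dropout makes the difference easy to see (a small sketch): in training mode it randomly zeroes activations and rescales the rest, while in eval mode it is the identity:

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(1, 8)

drop.train()
print(drop(x))   # random entries zeroed, survivors scaled by 1 / (1 - p) = 2.0

drop.eval()
print(drop(x))   # identity: all ones, fully deterministic
```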
The current official documentation states the following:
> This has any [sic] effect only on certain modules. See documentations of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.
Consider the following model:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GraphNet(torch.nn.Module):
    def __init__(self, num_node_features, num_classes):
        super(GraphNet, self).__init__()
        self.conv1 = GCNConv(num_node_features, 16)
        self.conv2 = GCNConv(16, num_classes)

    def forward(self, data):
        x, edge_index = data.x, data.edge_index
        x = self.conv1(x, edge_index)
        x = F.dropout(x, training=self.training)  # look here
        x = self.conv2(x, edge_index)
        return F.log_softmax(x, dim=1)
```
Here, the behaviour of dropout differs between the modes of operation: it is applied only when self.training == True. So after you call model.train(), the model's forward function performs dropout; after model.eval() (or model.train(mode=False)) it does not.
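The same pattern can be checked without the torch_geometric dependency (a sketch with a plain Linear layer): after model.eval(), self.training is False and the functional dropout inside forward becomes a no-op:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 4)

    def forward(self, x):
        x = self.fc(x)
        # applied only while self.training is True, i.e. after model.train()
        return F.dropout(x, p=0.5, training=self.training)

model = Net()
x = torch.ones(2, 4)

model.train()
print(torch.equal(model(x), model(x)))   # usually False: a fresh dropout mask per call

model.eval()
print(torch.equal(model(x), model(x)))   # True: dropout is skipped, output is deterministic
```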
mdl.is_eval()? - Charlie Parker
`self.training` or `model.training` is exactly what I was looking for! :) - Charlie Parker