I am trying to pass the output from the last convolutional layer to the FC (fully connected) layer, but I am struggling with the dimensions. By default, the network uses `AdaptiveAvgPool2d(output_size=(6, 6))`, which does not let me use `torch.use_deterministic_algorithms(True)` for reproducibility purposes. This is the error I am getting:
*mat1 dim 1 must match mat2 dim 0*
```
    (10): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU(inplace=True)
    (12): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(6, 6))
  (classifier): Sequential(
    (0): Dropout(p=0.5, inplace=False)
    (1): Linear(in_features=9216, out_features=4096, bias=True)
```
The input tensor has shape `[10, 3, 350, 350]`, and the output of the last Conv2d/MaxPool2d layer is `torch.Size([10, 256, 9, 9])`. I assume the number of input features for the FC layer should be 256 × 9 × 9 = 20736, but that does not work either.
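To double-check the 9×9 figure, the spatial size can be traced through the feature stack with the standard convolution/pooling output formula, floor((n + 2p − k)/s) + 1. The kernel/stride/padding values below are taken from torchvision's stock AlexNet, so this assumes the earlier feature layers are unmodified:

```python
def out_size(n, k, s, p):
    # floor((n + 2*p - k) / s) + 1 for a conv or pool layer
    return (n + 2 * p - k) // s + 1

n = 350
n = out_size(n, 11, 4, 2)  # Conv2d(3, 64, kernel_size=11, stride=4, padding=2) -> 86
n = out_size(n, 3, 2, 0)   # MaxPool2d(kernel_size=3, stride=2)                 -> 42
n = out_size(n, 5, 1, 2)   # Conv2d(64, 192, kernel_size=5, padding=2)          -> 42
n = out_size(n, 3, 2, 0)   # MaxPool2d(kernel_size=3, stride=2)                 -> 20
n = out_size(n, 3, 1, 1)   # the three 3x3, padding=1 convs keep the size       -> 20
n = out_size(n, 3, 2, 0)   # MaxPool2d(kernel_size=3, stride=2)                 -> 9
print(n, 256 * n * n)      # 9 20736
```

So the 256 × 9 × 9 = 20736 arithmetic does check out for a 350×350 input.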
Here is also the class I use to pass the output from the CONV to the FC layer unchanged (in place of the avgpool):
```python
import torch.nn as nn

class Identity(nn.Module):
    """Pass-through module; used to replace AdaptiveAvgPool2d."""
    def __init__(self):
        super().__init__()

    def forward(self, x):
        print('SHAPE', x.shape)  # debug: shape of the tensor entering the classifier
        return x
```
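To show what I expect to happen, here is a standalone sketch on a dummy tensor with the reported shape. It assumes the first classifier `Linear` is rebuilt with `in_features = 256 * 9 * 9` and that, as in AlexNet's `forward()`, the tensor is flattened with `torch.flatten(x, 1)` between the avgpool slot and the classifier:

```python
import torch
import torch.nn as nn

class Identity(nn.Module):
    """Pass-through module; stands in for model.avgpool."""
    def forward(self, x):
        return x

# Dummy tensor with the shape reported from the last Conv2d/MaxPool2d layer
x = torch.randn(10, 256, 9, 9)

x = Identity()(x)                  # replaces AdaptiveAvgPool2d
x = torch.flatten(x, 1)            # -> [10, 20736], as AlexNet's forward() does
fc = nn.Linear(256 * 9 * 9, 4096)  # first classifier Linear resized to 20736
print(fc(x).shape)                 # torch.Size([10, 4096])
```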
The idea was taken from this video: https://www.youtube.com/watch?v=qaDe0qQZ5AQ&t=301s. Thank you very much in advance.