I am trying to implement a paper on semantic segmentation, and I am confused about how to upsample the prediction map produced by my segmentation network to match the input image size.

For example, I am using a variant of ResNet-101 as the segmentation network (as used by the paper). With this network structure, an input of size 321x321 (again, as used in the paper) produces a final prediction map of size 41x41xC (where C is the number of classes). Because I have to make pixel-level predictions, I need to upsample it to 321x321xC. PyTorch provides a function to upsample to an output size that is a multiple of the prediction map size, so I cannot use that method directly here.
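For concreteness, a minimal sketch of the mismatch I mean (assuming an NCHW tensor; the batch size of 1 and C=21 are just placeholders):

    import torch
    import torch.nn.functional as F

    pred = torch.randn(1, 21, 41, 41)         # 41x41 prediction map, C=21 classes
    up = F.interpolate(pred, scale_factor=8)  # only an integer scale factor
    print(up.shape)                           # torch.Size([1, 21, 328, 328]) -- not 321x321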

Because this step is involved in every semantic segmentation network, I am sure there must be a standard way to implement it.

I would appreciate any pointers. Thanks in advance.

1 Answer

Maybe the simplest thing you can try is:

  • upsample by a factor of 8, so your 41x41 input turns into 328x328
  • perform center cropping to get your desired shape 321x321, for instance something like input[:, :, 3:-4, 3:-4] on an NCHW tensor (see the sketch below)
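Putting both steps together, a minimal sketch in PyTorch (assuming an NCHW prediction map; the batch size of 1 and C=21 are placeholders):

    import torch
    import torch.nn.functional as F

    pred = torch.randn(1, 21, 41, 41)  # 41x41xC prediction map in NCHW layout

    # step 1: upsample by a factor of 8 -> 328x328
    up = F.interpolate(pred, scale_factor=8, mode='bilinear', align_corners=False)

    # step 2: center-crop 328 -> 321 (drop 3 pixels on one side, 4 on the other)
    out = up[:, :, 3:-4, 3:-4]
    print(out.shape)  # torch.Size([1, 21, 321, 321])

The 3/4 crop is necessarily asymmetric because 328 - 321 = 7 is odd; bilinear mode should give smoother class score maps than the default nearest-neighbour interpolation.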