My goal is to do greyscale image segmentation using pixelwise classification, so I have two labels, 0 and 1. I built a network in PyTorch that looks like the following.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()

        self.up = nn.Upsample(scale_factor=2, mode='nearest')

        self.conv11 = nn.Conv2d(1, 128, kernel_size=3, padding=1)
        self.conv12 = nn.Conv2d(128, 256, kernel_size=3, padding=1)
        self.conv13 = nn.Conv2d(256, 2, kernel_size=3, padding=1)

    def forward(self, x):
        in_size = x.size(0)

        x = F.relu(self.conv11(x))
        x = F.relu(self.conv12(x))
        x = F.relu(self.conv13(x))

        x = F.softmax(x, 2)

        return x

In the last layer I designed conv13 so that it produces 2 channels, one for each class.

Since I was using softmax, I expected that the values at the same spatial index on the 2 separate channels would sum to 1.

For example, assume the output image has shape (2 {channels}, 4, 4). Then I was expecting that

image[channel 1][0][0] + image[channel 2][0][0] = 1

But the sum I get is 0.0015, which is not even close to 1. How can I use softmax to predict channelwise?

To check this, I used the following code:

import numpy as np

for batch, data in enumerate(trainloader, 0):
    inputs, labels = data
    inputs, labels = inputs.to(device), labels.to(device)

    optimizer.zero_grad()
    outputs = net(inputs)
    loss = rmse(outputs, labels)  # rmse is my loss function, defined elsewhere
    loss.backward()
    optimizer.step()
    running_loss += loss.item()

    # pull the prediction back to the CPU to inspect the two channels
    predicted = outputs.data
    predicted = predicted.to('cpu')
    predicted_img = predicted.numpy()

    predicted_img = np.reshape(predicted_img, (2, 4, 4))  # batch size is 1

    print(predicted_img[0])
    print(predicted_img[1])

Those prints showed the following:

[[0.2762002  0.13305853 0.2510342  0.23114938]
 [0.26812425 0.28500515 0.05682982 0.15851443]
 [0.1640967  0.5409352  0.43547812 0.44782472]
 [0.29157883 0.0410011  0.2566578  0.16251141]]

[[0.23052207 0.868455   0.43436486 0.0684725 ]
 [0.18001427 0.02341573 0.0727293  0.2525512 ]
 [0.06587404 0.04974682 0.3773188  0.6559266 ]
 [0.5235896  0.05838248 0.11558701 0.02304965]]

It is clear that the corresponding elements do not sum to 1. For example, at index (0, 0):

0.2762002 (channel 1) + 0.23052207 (channel 2) = 0.50672227 != 1
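
The same behaviour reproduces without any training; here is a minimal sketch of the same check on a random input:

net = Net()
with torch.no_grad():
    out = net(torch.rand(1, 1, 4, 4))  # a single random 4x4 greyscale image
print(out[0, 0] + out[0, 1])           # I expected a 4x4 array of ones here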

How can I fix it?

Why have both the tensorflow and pytorch tags? - nuric
Edited! It seems Stack Overflow automatically suggested the tensorflow tag and I prematurely submitted the question without removing it. - Farshid Rayhan

1 Answer


Please check the last line of my code below: the dimension you passed to softmax was wrong. The conv output has shape (batch, channel, height, width), and F.softmax(x, 2) normalizes over dim 2, the height axis. That is also why your printed values do sum to 1, just down each column of a single channel (0.2762002 + 0.26812425 + 0.1640967 + 0.29157883 ≈ 1) rather than across the two channels. For a per-pixel class distribution you need dim 1, the channel axis.
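
As a quick standalone illustration (a minimal sketch on a random tensor, independent of your network):

import torch
import torch.nn.functional as F

x = torch.rand(1, 2, 4, 4)             # (batch, channel, height, width)
print(F.softmax(x, dim=1).sum(dim=1))  # all ones: the two channels sum to 1 at every pixel
print(F.softmax(x, dim=2).sum(dim=2))  # all ones along the height axis -- your current behaviour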

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()

        self.up = nn.Upsample(scale_factor=2, mode='nearest')

        self.conv11 = nn.Conv2d(1, 128, kernel_size=3, padding=1)
        self.conv12 = nn.Conv2d(128, 256, kernel_size=3, padding=1)
        self.conv13 = nn.Conv2d(256, 2, kernel_size=3, padding=1)

    def forward(self, x):
        in_size = x.size(0)

        x = F.relu(self.conv11(x))
        x = F.relu(self.conv12(x))
        x = F.relu(self.conv13(x))

        x = F.softmax(x, 1)  # this line is changed: softmax over the channel dim

        return x

net = Net()
inputs = torch.rand(1, 1, 4, 4)
out = net(inputs)       # wrapping in Variable is no longer needed in modern PyTorch
print(out)
print(out.sum(dim=1))   # each per-pixel pair of channel values now sums to 1
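
If you want an explicit check that each per-pixel pair now forms a probability distribution, an assertion like this (my addition, not part of the original snippet) should pass:

# every (channel 0, channel 1) pair should sum to 1 at each pixel
assert torch.allclose(out.sum(dim=1), torch.ones(1, 4, 4), atol=1e-6)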

Hope that helps.