
I have a PyTorch network that predicts the location of devices using Wi-Fi RSS data. So the output layer contains two neurons corresponding to x and y coordinates. I want to use mean localization error as the loss function.

i.e. loss = mean(sqrt((x_predicted - x_real)^2 + (y_predicted - y_real)^2))

The equation finds the error distance between predicted and real locations. How can I include this instead of MSE?
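
For reference, this is roughly what I mean, written out with NumPy on made-up numbers (just to make the intended loss concrete):

import numpy as np

# predicted and real locations, shape (batch_size, 2): columns are x and y
predicted = np.array([[1.0, 2.0], [3.0, 4.0]])
real = np.array([[1.0, 0.0], [0.0, 0.0]])

# Euclidean distance per sample, then the mean over the batch
errors = np.sqrt(np.sum((predicted - real) ** 2, axis=1))  # [2.0, 5.0]
mean_localization_error = errors.mean()                    # 3.5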


1 Answer


As shown in the tutorial, just implement a criterion function (you can name it whatever you like) and use it:

import torch

def custom_loss(output, label):
    # mean Euclidean distance between predicted and true (x, y) pairs over the batch
    return torch.mean(torch.sqrt(torch.sum((output - label) ** 2, dim=1)))
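
A quick sanity check with made-up coordinates (the per-sample distances are 2.0 and 5.0, so the mean should be 3.5):

output = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
label = torch.tensor([[1.0, 0.0], [0.0, 0.0]])
print(custom_loss(output, label))  # tensor(3.5000)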

and then call it in the training loop (taken from the linked tutorial):

for epoch in range(2):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = custom_loss(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')
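
As a side note, the same loss can also be written with torch.norm, which computes the per-sample Euclidean distance directly (equivalent to the function above for two-dimensional outputs):

def custom_loss(output, label):
    # L2 norm of the (x, y) error per sample, averaged over the batch
    return torch.norm(output - label, dim=1).mean()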

HTH