A PyTorch question about backward(). In the PyTorch blitz tutorial (copied and pasted below), they pass the vector [0.1, 1.0, 0.0001] to backward(). I can intuitively guess why the shape of the vector is [3] (it matches the shape of y), but I do not understand where the values 0.1, 1.0, 0.0001 come from. Another tutorial I looked at passes in all ones, so that backward on a vector is called like this: L.backward(torch.ones(L.shape))
# copied from blitz tutorial
# Now in this case y is no longer a scalar. torch.autograd could not compute the full Jacobian directly, but if we just want the vector-Jacobian product, simply pass the vector to backward as argument:
v = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
y.backward(v)
print(x.grad)
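To make my question concrete, here is a minimal self-contained sketch of what I think is happening (my assumption, not from the tutorial: y.backward(v) computes the vector-Jacobian product v^T J, i.e. the same thing as the gradient of the scalar torch.dot(v, y)). The toy function y = x * x is my own example, not the tutorial's:

```python
import torch

# Toy example: y = x * x elementwise, so the Jacobian J is diag(2 * x).
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = x * x

# Passing v to backward() should give v^T J = v * 2 * x elementwise.
v = torch.tensor([0.1, 1.0, 0.0001])
y.backward(v)
print(x.grad)

# My assumption: this is equivalent to reducing y to a scalar with
# torch.dot(v, y) and calling backward() with no argument.
x2 = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
torch.dot(v, x2 * x2).backward()
print(x2.grad)
```

Both print the same gradient, which is why I think the choice of v here is arbitrary (the tutorial's [0.1, 1.0, 0.0001] just demonstrates the mechanism), but I would like confirmation.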
If anyone can explain the reasoning for [0.1, 1.0, 0.0001], I would appreciate it.