This is confusing me quite a bit. I can get second derivatives without any problem when the inputs are scalars, but when they are vectors, PyTorch fails with:

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

I already set requires_grad=True on a, which, from the documentation, seems to be the correct way to ask for derivatives with respect to a. I made the function very simple to reproduce the error: it takes inputs a and b, multiplies them elementwise, and sums the result. Both a and b are arrays of length 3 here. The error is raised by the call to first_derivative.backward(). Any PyTorch veterans with potential solutions, I would greatly appreciate it!
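For reference, this is the kind of scalar setup that works for me without any problem. The exact scalar function isn't reproduced here; a ** 2 * b is a hypothetical stand-in whose first derivative still depends on a:

import torch

# Scalar case (stand-in function): both derivative orders work fine
a = torch.tensor(2.0, requires_grad=True)
b = torch.tensor(3.0)
value = a ** 2 * b                                           # a^2 * b = 12.0
(first,) = torch.autograd.grad(value, a, create_graph=True)  # d/da = 2*a*b = 12.0
first.backward()   # succeeds: first is a scalar and has a grad_fn
print(a.grad)      # d^2/da^2 = 2*b = 6.0

The vector version below, however, fails: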

import torch
import numpy as np

def function(a, b):
    # Elementwise product, then sum -> scalar payoff
    payoff = torch.sum(torch.multiply(a, b))
    return payoff

def derivative(a, b):
    a = torch.tensor(a, requires_grad=True)   # differentiate w.r.t. a
    b = torch.tensor(b, requires_grad=False)  # b is held constant
    value = function(a, b)
    # First derivative w.r.t. a; create_graph=True so the result
    # can itself be differentiated
    (first_derivative,) = torch.autograd.grad(value, a, create_graph=True)
    first_derivative.backward()  # <-- RuntimeError raised here
    second_derivative = a.grad
    return value, first_derivative, second_derivative

a = np.array([120.0, 100.0, 80.0])
b = np.array([0.1, 0.2, 0.3])

value, first_derivative, second_derivative = derivative(a, b)
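To narrow things down, here is a quick diagnostic sketch of the same computation, inspecting the tensor that backward() is called on. Since the payoff is linear in a, the first derivative comes back as a constant (it is just b), so it carries no graph even with create_graph=True:

import torch

a = torch.tensor([120.0, 100.0, 80.0], requires_grad=True)
b = torch.tensor([0.1, 0.2, 0.3])
value = torch.sum(a * b)
(first_derivative,) = torch.autograd.grad(value, a, create_graph=True)
print(first_derivative)                # tensor([0.1000, 0.2000, 0.3000]), equal to b
print(first_derivative.grad_fn)        # None
print(first_derivative.requires_grad)  # False, which matches the RuntimeError

Is that constant first derivative the reason backward() fails here, and if so, what is the right way to get the second derivative for a vector input?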