For performance reasons, I need gradients and Hessians that run as fast as hand-written functions (the ForwardDiff library, for example, makes my code significantly slower). So I tried metaprogramming with the @generated macro, testing it on a simple function:
using Calculus

hand_defined_derivative(x) = 2x - sin(x)

symbolic_primal = :( x^2 + cos(x) )
symbolic_derivative = differentiate(symbolic_primal, :x)   # returns an Expr

@generated functional_derivative(x) = symbolic_derivative  # the Expr is returned at generation time and compiled as the method body
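For reference, symbolic_derivative is just an (unsimplified) expression, not a function; it should look something like the x-component shown in the gradient output further down:

symbolic_derivative   # >> :(2 * 1 * x ^ (2 - 1) + 1 * -(sin(x)))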
This gave me exactly what I wanted:
using BenchmarkTools

rand_x = rand(10000);
exact_values = hand_defined_derivative.(rand_x)
test_values = functional_derivative.(rand_x)
isequal(exact_values, test_values) # >> true

@btime hand_defined_derivative.(rand_x); # >> 73.358 μs (5 allocations: 78.27 KiB)
@btime functional_derivative.(rand_x);   # >> 73.456 μs (5 allocations: 78.27 KiB)
I now need to generalize this to functions with more arguments. The obvious extrapolation is:
symbolic_primal = :( x^2 + cos(x) + y^2 )
symbolic_gradient = differentiate(symbolic_primal, [:x, :y])   # returns a vector of expressions, one per variable
The resulting symbolic_gradient behaves as expected (just as in the 1-dimensional case), but the @generated macro does not handle multiple dimensions the way I believed it would:
@generated functional_gradient(x,y) = symbolic_gradient
functional_gradient(1.0,1.0)
>> 2-element Array{Any,1}:
:(2 * 1 * x ^ (2 - 1) + 1 * -(sin(x)))
:(2 * 1 * y ^ (2 - 1))
That is, the generated function just returns the vector of expressions instead of compiling them into executable code. Is there an easy way to solve this?
P.S.: I know I could define the derivatives with respect to each argument as one-dimensional functions and bundle them together into a gradient (that's what I'm currently doing), but I'm sure there must be a better way.
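For concreteness, here is a minimal sketch of that current workaround (dfdx, dfdy and bundled_gradient are illustrative names, not my actual code):

using Calculus

symbolic_primal = :( x^2 + cos(x) + y^2 )
symbolic_dx = differentiate(symbolic_primal, :x)   # expression in x only
symbolic_dy = differentiate(symbolic_primal, :y)   # expression in y only

# one @generated function per partial derivative, exactly as in the 1-D case
@generated dfdx(x, y) = symbolic_dx
@generated dfdy(x, y) = symbolic_dy

# bundle the partials into a gradient by hand
bundled_gradient(x, y) = [dfdx(x, y), dfdy(x, y)]

bundled_gradient(1.0, 1.0)   # should give [2 - sin(1.0), 2.0]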