Julia has the very nice feature of giving access to its own syntactic tree, which makes it easy to generate new functions programmatically. However, evaluating an expression with eval is much slower than running ordinary compiled Julia code.
For example:
julia> timing = @time for i in [1:100] tan(pi/2*rand()); end
elapsed time: 1.513e-5 seconds (896 bytes allocated)
julia> timing = @time for i in [1:100] x = pi/2*rand(); eval(:(tan(x))); end
elapsed time: 0.0080231 seconds (23296 bytes allocated)
julia> timing = @time for i in [1:100] eval(:(tan(pi/2*rand()))); end
elapsed time: 0.017245327 seconds (90496 bytes allocated)
Is there a way to make eval run at the same speed as normal Julia code?
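A common workaround (a sketch, not taken from the post) is to pay the eval cost once by splicing the expression into a function definition, and then call the compiled function in the loop:

```julia
ex = :(tan(x))

# Interpolate the expression into an anonymous function and eval it once;
# the resulting function is compiled to native code on its first call.
f = eval(:(x -> $ex))

# The hot loop now calls compiled code instead of eval'ing an Expr each time.
for i in 1:100
    f(pi/2*rand())
end
```

In more recent Julia versions, calling f from inside the same function that created it requires Base.invokelatest(f, x) because of world-age restrictions; at the top level, as here, a plain call works.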
EDIT:
I was able to slightly speed up eval using the precompile
function, but that is still not enough:
julia> tmp3 = :(sin(x))
:(sin(x))
julia> timing = @time for i in [1:100000] x = pi/2*rand(); eval(tmp3); end
elapsed time: 8.651145772 seconds (13602336 bytes allocated)
julia> precompile(tmp3,(Float64,Float64))
julia> timing = @time for i in [1:100000] x = pi/2*rand(); eval(tmp3); end
elapsed time: 8.611654016 seconds (13600048 bytes allocated)
EDIT2:
@Ivarne suggested that I provide details on my project. Well, I would like to use the metaprogramming capabilities of Julia to compute symbolic derivatives and then evaluate them.
I wrote a function derivative(ex::Expr, arg::Symbol)
that takes an expression and an argument, and returns a new expression that is the derivative of ex
with respect to arg
. Unfortunately, the resulting Expr
takes too long to evaluate.
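For illustration only, such a derivative function could look something like the following minimal sketch (hypothetical code, not the derivative function from the project):

```julia
# Hypothetical sketch of a recursive symbolic derivative (not the author's code)
derivative(ex::Symbol, arg::Symbol) = ex == arg ? 1 : 0
derivative(ex::Number, arg::Symbol) = 0
function derivative(ex::Expr, arg::Symbol)
    f, u = ex.args[1], ex.args[2]
    if f == :sin
        return :(cos($u) * $(derivative(u, arg)))    # chain rule
    elseif f == :cos
        return :(-sin($u) * $(derivative(u, arg)))
    end
    error("no derivative rule for $f")
end

derivative(:(sin(x)), :x)   # returns :(cos(x) * 1)
```

The result is again an Expr, which is exactly why evaluation speed matters: the derivative only has to be built once, but it may need to be evaluated many times.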
EDIT3: as a conclusion, the performance using @eval
instead of eval
:
julia> timing = @time for i in [1:100000] x = pi/2*rand(); @eval(tmp3); end
elapsed time: 0.005821547 seconds (13600048 bytes allocated)
tmp3
is still :(sin(x))
.
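A word of caution on interpreting this timing: @eval(tmp3) expands to eval(:(tmp3)), which merely looks up the variable tmp3 and returns the Expr object; sin is never actually called, which likely explains most of the speedup. To evaluate the stored expression itself, it has to be interpolated:

```julia
tmp3 = :(sin(x))
x = pi/4

@eval tmp3     # expands to eval(:(tmp3)): returns the Expr :(sin(x)); sin is never called
@eval $tmp3    # expands to eval(:(sin(x))): actually computes sin(x)
```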
Comment: could one write myExpression() = compile(:(tan(pi/2*rand())))
or myExpression(x) = compile(:(tan(x)))
to create a compiled object at run time? – S4M