I tried to speed up an R function by porting it to Julia, but to my surprise Julia was slower. The function sequentially updates a list of vectors (an array of arrays in Julia). The index of the list element to be updated and the length of the new vector are not known beforehand. I have written a test function that demonstrates the behavior.
Julia
function MyTest(n)
    a = [[0.0] for i in 1:n]
    for i in 1:n
        a[i] = cumsum(ones(i))
    end
    a
end
R
MyTest <- function(n) {
    a <- as.list(rep(0, n))
    for (i in 1:n)
        a[[i]] <- cumsum(rep(1, i))
    a
}
By setting n to 5000, 10000 and 20000, typical computing times are (median of 21 tests):
- R: 0.14, 0.45, and 1.28 seconds
- Julia: 0.31, 3.38, and 27.03 seconds
I used a Windows laptop with 64-bit Julia 1.3.1 and 64-bit R 3.6.1.
Both functions use 64-bit floating-point numbers. My real problem involves integers, where R is even more favorable, but an integer comparison isn't fair since R uses 32-bit integers and Julia uses 64-bit. Is there something I can do to speed up Julia, or is Julia really much slower than R in this case?
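As an editorial aside (not from the original post): each iteration of the Julia version allocates two vectors, one for ones(i) and one for cumsum's result. Since cumsum(ones(i)) is just the vector 1.0, 2.0, ..., i, one allocation per iteration can be avoided. A sketch of such a variant (MyTest2 is an illustrative name, not from the question):

```julia
# Lower-allocation variant: cumsum(ones(i)) equals [1.0, 2.0, ..., i],
# which collect(1.0:i) builds directly in a single allocation.
function MyTest2(n)
    a = Vector{Vector{Float64}}(undef, n)  # no placeholder [0.0] vectors needed
    for i in 1:n
        a[i] = collect(1.0:i)
    end
    a
end
```

This produces the same result as MyTest while halving the number of temporary arrays; whether it closes the gap to R would need to be benchmarked.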
Comments:

- With using BenchmarkTools; @btime MyTest(5000); I get 39.342 ms on my personal laptop, and @btime MyTest(20000); takes 766.407 ms. – giordano
- With @time MyTest(20000); I get 1.005717 seconds. – giordano
- You can use Int32 in Julia as well, should you want to. – Fredrik Bagge
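A likely reason the commenters see sub-second times where the question reports 27 seconds: the first call to a Julia function includes JIT compilation, which @time counts but BenchmarkTools' @btime excludes by running the function repeatedly. A minimal sketch of timing only the compiled function, using just the standard library:

```julia
function MyTest(n)
    a = [[0.0] for i in 1:n]
    for i in 1:n
        a[i] = cumsum(ones(i))
    end
    a
end

# The first call triggers JIT compilation, so warm up before timing:
MyTest(1)            # warm-up call; compiles MyTest
@time MyTest(20000)  # this measurement now excludes compilation
```

BenchmarkTools.jl (an external package) automates this with @btime, which is what the comments above use.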