I'm trying to use the %timeit magic, but it gives me a result I don't want.
The code I'm running in an IPython instance is:
import numpy as np
a = np.random.random((1024,1024))
b = np.random.random((1024,1024))
%timeit a+b
The output is: The slowest run took 4.09 times longer than the fastest. This could mean that an intermediate result is being cached. 100 loops, best of 3: 1.93 ms per loop
Normally, on other computers, I get the average time and the standard deviation. I'm interested in both of those results, but I can't figure out how to get them here. I tried looking around, but everyone seems to use the timeit library rather than the magic. I'd like to know how to change the default behavior of this magic.
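For reference, the mean and standard deviation can be computed directly with the standard timeit module, independently of the magic. This is a sketch, not a change to %timeit's own output; the repeat/loop counts (7 runs of 100 loops) are chosen here to mirror the structure %timeit uses, not taken from the question:

```python
import timeit
import statistics

import numpy as np

a = np.random.random((1024, 1024))
b = np.random.random((1024, 1024))

# 7 runs, each timing 100 executions of a + b
runs = timeit.repeat(lambda: a + b, repeat=7, number=100)

# Convert each run's total time into seconds per single execution
per_loop = [t / 100 for t in runs]

mean = statistics.mean(per_loop)
std = statistics.stdev(per_loop)
print(f"{mean * 1e3:.3f} ms +- {std * 1e3:.3f} ms per loop "
      f"(mean +- std. dev. of 7 runs, 100 loops each)")
```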