This looks like calloc hitting a threshold where it requests zeroed memory from the OS and doesn't need to initialize it manually. Looking through the source code, numpy.zeros eventually delegates to calloc to acquire a zeroed block of memory. If you compare it to numpy.empty, which performs no initialization at all:
In [15]: %timeit np.zeros((5000, 5000))
The slowest run took 12.65 times longer than the fastest. This could mean that an
intermediate result is being cached.
100000 loops, best of 3: 10 µs per loop
In [16]: %timeit np.empty((5000, 5000))
The slowest run took 5.05 times longer than the fastest. This could mean that an
intermediate result is being cached.
100000 loops, best of 3: 10.3 µs per loop
you can see that np.zeros has no initialization overhead for the 5000x5000 array.
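If you want to see roughly where this threshold kicks in, here is a minimal probe that times both calls over a range of sizes. The sizes and loop count are arbitrary choices, and the exact crossover point depends on your platform's allocator (glibc, for instance, switches large requests over to mmap), so treat this as a sketch rather than a definitive benchmark:

import timeit
import numpy as np

# Probe np.zeros vs np.empty across sizes to see where the explicit
# zeroing cost disappears, i.e. where the allocator starts handing back
# pre-zeroed pages straight from the OS. Sizes and number=100 are
# arbitrary; the crossover depends on the platform's allocator.
for n in (100, 500, 1000, 2000, 5000):
    t_zeros = timeit.timeit(lambda: np.zeros((n, n)), number=100)
    t_empty = timeit.timeit(lambda: np.empty((n, n)), number=100)
    print(f"{n}x{n}: zeros={t_zeros:.4f}s  empty={t_empty:.4f}s  (100 calls)")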
In fact, the OS isn't even "really" allocating that memory until you try to access it. A request for terabytes of memory succeeds even on a machine without terabytes to spare:
In [23]: x = np.zeros(2**40) # No MemoryError!
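You can watch this lazy allocation happen by checking the process's resident set size before and after touching the pages. The following is a minimal sketch assuming a Linux system (where ru_maxrss is reported in kilobytes) with default overcommit settings, and a more modest 8 GiB request so that the portion we touch actually fits in RAM:

import resource
import numpy as np

def rss_mb():
    # ru_maxrss is in kilobytes on Linux (in bytes on macOS)
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024

x = np.zeros(2**30)   # 8 GiB of float64 requested, but no page touched yet
print(f"after np.zeros: {rss_mb():.0f} MB")

x[:2**27] = 1.0       # write the first 1 GiB; only these pages get backed
print(f"after writing:  {rss_mb():.0f} MB")

The resident set size barely moves after np.zeros and grows by roughly the amount of memory actually written, which is what you'd expect if the OS is only backing pages on first access.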
From the comments:

DYZ: "np.zeros(S) changes from 5.5 ms per loop to 9.6 µs per loop. However, the number of loops in %timeit simultaneously changes from 100 to 100,000. My guess is that for an array of a certain size and above, the difference between the slowest and fastest runs becomes large enough to trigger 1000 times more loops, which drastically improves the measurement accuracy and reduces the reported running time. Not because it is shorter, but because it is measured more accurately."

juanpa.arrivillaga: "I used the timeit.timeit function, controlling the number at 1000, and I'm getting 0.343710215005558 for (1000, 1000) and 0.0028691469924524426 for (5000, 5000)."
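The controlled measurement juanpa.arrivillaga describes can be reproduced with something like the following sketch, which pins the loop count so that %timeit's automatic calibration cannot differ between the two cases:

import timeit

setup = "import numpy as np"
# Fix number=1000 for both shapes so the comparison is not affected by
# %timeit choosing a different loop count for each measurement.
small = timeit.timeit("np.zeros((1000, 1000))", setup=setup, number=1000)
large = timeit.timeit("np.zeros((5000, 5000))", setup=setup, number=1000)
print(f"(1000, 1000): {small:.6f} s total")
print(f"(5000, 5000): {large:.6f} s total")

If the larger array still comes out faster per call under a fixed loop count, as in juanpa.arrivillaga's numbers, that points to the calloc behavior rather than a pure measurement artifact.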