In C#, C++, Java, or JavaScript, for example, the effective int size is 32 bits. If we want to compute with a larger number, say one that needs 70 bits, we have to fall back on a software solution (arbitrary-precision arithmetic).
Python has a tricky, unbounded internal integer representation, and I cannot figure out what the most efficient int size for integer arithmetic is.
In other words, is there some int size, say 64 bits, at which ints are handled most efficiently? Or does it not matter whether an int is 16, 32, 64, or some arbitrary number of bits wide, and Python handles all of these with the same efficiency?
In short, does Python always use arbitrary-precision arithmetic, or does it use hardware arithmetic for 32/64-bit values?
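To make the question concrete, here is a minimal sketch (plain CPython 3; the exact byte counts reported by sys.getsizeof are not specified in the question and vary by version and platform) showing that Python exposes a single int type whose storage grows with the value:

```python
import sys

# Python 3 has a single int type with no fixed width: values never
# overflow, the object simply grows to hold more internal digits.
small = 2 ** 30   # fits in a machine word
big = 2 ** 70     # wider than any common hardware integer

print(type(small) is type(big))   # True -- same type either way
print(sys.getsizeof(small))       # size in bytes (version/platform dependent)
print(sys.getsizeof(big))         # larger: more internal digits are stored
print(big * big)                  # exact result, no overflow or wraparound
```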
There is the array module for primitive, numeric arrays, which sometimes suffices (if you are going to do any heavy-duty processing, then probably numpy is what you want). - juanpa.arrivillaga
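For illustration, a minimal sketch of the array module mentioned above, using the standard 'q' type code for signed 64-bit integers (numpy would look similar, with richer operations):

```python
from array import array

# Type code 'q' stores signed 64-bit integers as raw machine values,
# not as full Python int objects.
values = array('q', [1, 2, 3, 2 ** 40])
values.append(2 ** 50)

print(values.itemsize)   # 8 bytes per element
print(list(values))      # [1, 2, 3, 1099511627776, 1125899906842624]

# A value outside the 64-bit range is rejected instead of growing:
try:
    values.append(2 ** 70)
except OverflowError as exc:
    print("OverflowError:", exc)
```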
The size of the int object is an implementation detail. I don't understand what you are asking here: "is there some int size, say 64 bits, at which ints are handled most efficiently?" - juanpa.arrivillaga
Hardware, for integers no larger than the CPU's word; software, when we compute with larger numbers (wider than the CPU's word), but slowly, using extra code. Does Python always use the slow software arithmetic, or does it use hardware arithmetic for 32/64 bits? - No Name QA
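For reference, a small sketch of how one can inspect the internal representation (this is specific to CPython; other implementations may differ, and the digit counts below are derived from bit_length rather than read from the object directly):

```python
import sys

# CPython stores every int as a sequence of fixed-size internal "digits".
# sys.int_info reports the digit width -- typically 30 bits (4-byte digits)
# on 64-bit builds and 15 bits on some 32-bit builds.
print(sys.int_info)
# e.g. sys.int_info(bits_per_digit=30, sizeof_digit=4, ...)

bits = sys.int_info.bits_per_digit
for value in (1, 2 ** 40, 2 ** 70):
    digits = max(1, -(-value.bit_length() // bits))   # ceiling division
    print(value.bit_length(), "bits ->", digits, "internal digits")
```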