7
votes

What is the complexity of converting a very large n-bit number to a decimal representation?

My thought is that the elementary algorithm of repeated integer division, taking the remainder to get each digit, would have O(M(n) log n) complexity, where M(n) is the complexity of the multiplication algorithm; however, the division is not between two n-bit numbers but rather between one n-bit number and a small constant, so it seems to me the complexity could be smaller.

1
@xdavidliu: There is no need to spend O(M(n)) time to compute the quotient and remainder of a large integer by 10. Linear time suffices. - tmyklebu
@tmyklebu never mind, yes that is correct - xdavidliu

1 Answer

2
votes

Naive base-conversion as you described takes quadratic time; you do about n bigint-by-smallint divisions, most of which take time linear in the size of the n-bit bigint.
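As a sketch of why this is quadratic: the number has about n/log2(10) ≈ 0.3n decimal digits, and each divide-by-10 walks the full bigint, so the total work is O(n^2). A minimal Python version (the function name is my own):

```python
def to_decimal_naive(x: int) -> str:
    """Convert a nonnegative int to its decimal string by repeated division.

    Each divmod scans the whole bigint (linear time), and we do one
    per output digit, so the overall cost is quadratic in the bit length.
    """
    if x == 0:
        return "0"
    digits = []
    while x > 0:
        x, r = divmod(x, 10)          # peel off the least significant digit
        digits.append(chr(ord('0') + r))
    return ''.join(reversed(digits))  # digits come out in reverse order
```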

You can do base conversion in O(M(n) log n) time, however: pick a power of the target base whose square is roughly the number being converted, do a divide-with-remainder by it (which can be done in O(M(n)) time via Newton's method), and recurse on the two halves.
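A sketch of the divide-and-conquer scheme in Python (function names are mine; note that CPython's built-in divmod on huge ints is not itself O(M(n)), so this only achieves the stated bound when paired with a fast division routine):

```python
def to_decimal_fast(x: int) -> str:
    """Convert a nonnegative int to decimal by recursive splitting.

    Split the number at a power of 10 with about half the digits,
    then convert the two halves independently. With fast division
    this recursion runs in O(M(n) log n) time.
    """
    def rec(x: int, ndigits: int) -> str:
        # Base case: small enough to convert directly, zero-padded
        # so the halves concatenate correctly.
        if ndigits <= 16:
            return format(x, '0%dd' % ndigits)
        lo_digits = ndigits // 2
        hi, lo = divmod(x, 10 ** lo_digits)   # split at ~half the digits
        return rec(hi, ndigits - lo_digits) + rec(lo, lo_digits)

    if x == 0:
        return "0"
    # Overestimate the digit count from the bit length
    # (30103/100000 is slightly above log10(2)).
    ndigits = x.bit_length() * 30103 // 100000 + 1
    return rec(x, ndigits).lstrip('0')
```

The zero-padding in the base case matters: the low half may have leading zeros (e.g. splitting 100...0), and dropping them would corrupt the concatenated result.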