The number of digits needed to represent a number n in some base B is given by
⌈log_B(n + 1)⌉
This means that hexadecimal numbers are more efficient than decimal numbers whenever
⌈log_16(n + 1)⌉ < ⌈log_10(n + 1)⌉
If you have some number n to test, you can just plug it into this formula to see whether hexadecimal or decimal will be more efficient.
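If you'd rather check this programmatically, here's a small Python sketch (the helper names num_digits and hex_is_shorter are just mine) that counts digits by repeated division, which sidesteps any floating-point fuzziness in the log:

def num_digits(n, base):
    """How many digits it takes to write n in the given base (n >= 0)."""
    if n == 0:
        return 1  # "0" still takes one digit, even though ceil(log(1)) = 0
    count = 0
    while n > 0:
        n //= base
        count += 1
    return count

def hex_is_shorter(n):
    """True when n needs strictly fewer hex digits than decimal digits."""
    return num_digits(n, 16) < num_digits(n, 10)

print(hex_is_shorter(99999))   # False: 5 digits either way (0x1869F)
print(hex_is_shorter(100000))  # True: 0x186A0 is 5 hex digits vs. 6 decimal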
But let's map out where the cutoffs fall: for each digit count, the smallest number that needs that many digits in each base. We get this table:
Num Digits    Decimal Cutoff    Hex Cutoff
------------------------------------------
         1                 0             0
         2                10            16
         3               100           256
         4              1000          4096
         5             10000         65536
         6            100000       1048576
         7           1000000      16777216
Notice that when we hit six decimal digits, hexadecimal is always strictly more compact. A decimal number with six digits is at most 999999, which still fits in just five hex digits (five hex digits go up to 1048575), and from there each hex cutoff stays ahead of the next decimal cutoff (16^5 = 1048576 > 10^6, 16^6 = 16777216 > 10^7, and so on), so every number with k ≥ 6 decimal digits fits in at most k - 1 hex digits. In other words, from 100000 onward, you are better off writing the number in hexadecimal than in decimal.
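If you want to regenerate or extend the table, a few lines of Python will do it. The cutoff for k digits in base B is just B^(k-1), with 0 used as the one-digit cutoff to match the table above; the cutoff helper is just an illustrative name:

def cutoff(base, k):
    """Smallest number that needs k digits in the given base (counting 0 as a one-digit number)."""
    return 0 if k == 1 else base ** (k - 1)

print(f"{'Num Digits':>10}  {'Decimal Cutoff':>14}  {'Hex Cutoff':>10}")
for k in range(1, 8):
    print(f"{k:>10}  {cutoff(10, k):>14}  {cutoff(16, k):>10}")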
EDIT: Because you are counting the 0x characters as part of the total number of digits required, you're going to be looking for the first number where
⌈log_16(n + 1)⌉ + 2 < ⌈log_10(n + 1)⌉
In this case, counting total characters (so the hex side includes the two characters of the 0x prefix), the table looks like this:
Total Chars    Decimal Cutoff    Hex Cutoff (with 0x)
-----------------------------------------------------
          3               100                       1
          4              1000                      16
          5             10000                     256
          6            100000                    4096
          7           1000000                   65536
          8          10000000                 1048576
          9         100000000                16777216
         10        1000000000               268435456
         11       10000000000              4294967296
         12      100000000000             68719476736
         13     1000000000000           1099511627776
         14    10000000000000          17592186044416
In this case, reading down the table, the hex column first overtakes the decimal column at the 13-character row: 1000000000000 (10^12) already needs 13 decimal digits, while hex does not reach 13 characters until 1099511627776 (16^10), so hex is strictly shorter in that window, and at 16^10 the two lengths meet again at 13 characters each. From that point forward you are always at least as well off using hexadecimal, but the two keep tying each time you cross a power of 16 (17592186044416 = 16^11 takes 14 characters either way, for example). The last such tie ends at 99999999999999999, so from 100000000000000000 (10^17) onward you are strictly better off writing the number in hex.
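An easy way to sanity-check these boundary values is to compare the literal lengths directly. This is just a quick verification sketch (the compare helper is an illustrative name); Python's hex() output already includes the 0x prefix, so len(hex(n)) is exactly the character count being compared:

def compare(n):
    """Compare the length of n as a decimal literal vs. a 0x-prefixed hex literal."""
    dec, hx = len(str(n)), len(hex(n))  # hex() includes the leading "0x"
    verdict = "hex shorter" if hx < dec else ("tie" if hx == dec else "decimal shorter")
    return f"{n}: {dec} decimal chars vs {hx} hex chars -> {verdict}"

for n in (99999999999,           # 10^11 - 1: last number where decimal is strictly shorter
          1000000000000,         # 10^12: hex (with 0x) is strictly shorter for the first time
          1099511627776,         # 16^10: back to a tie, 13 characters each
          17592186044416,        # 16^11: another tie, 14 characters each
          99999999999999999,     # 10^17 - 1: the very last tie
          100000000000000000):   # 10^17: hex is strictly shorter from here on
    print(compare(n))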
Hope this helps!