1 vote

I have tried to understand the time complexity of the code below. When I calculate the time complexity myself, it comes out to O(n), because NO_OF_BITS is always the same for any int, so the number of loop iterations does not grow or shrink with the input. I am not sure about the cost of the bitwise operations executed inside the loop. Can anyone help me understand/calculate the time complexity of this code?

    unsigned int reverseBits(unsigned int num)
    {
        int NO_OF_BITS = sizeof(num) * 8;    /* 32 for a typical unsigned int */
        unsigned int reverse_num = 0;
        int i;
        for (i = 0; i < NO_OF_BITS; i++)
        {
            if (num & (1u << i))                             /* is bit i set? */
                reverse_num |= 1u << ((NO_OF_BITS - 1) - i); /* mirror it */
        }
        return reverse_num;
    }
Well, yes, (num & (1 << i)) is executed N times. All bits are looked at. Who said this is O(log N)? – OneCricketeer
@cricket_007: N as in num, not NO_OF_BITS. – Eric Duminil
Why do you think this is O(log N)? – Lasse V. Karlsen
@Eric I guess it wasn't stated what N was. – OneCricketeer
I don't think we should make the mistake here of considering the time complexity of an implementation (which is limited by irrelevant platform constraints); we should rather consider the algorithm. – harold

2 Answers

5 votes

With your logic, the code is actually O(1) because NO_OF_BITS will always be 32.

If you talk about time complexity and asymptotic behaviour, though, you have to let the input size grow. This code is O(NO_OF_BITS), which is O(log(num)).

Since the input of the function is num, it makes sense to define n as num, not as NO_OF_BITS.
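
To see why the O(log(num)) reading is natural, here is a small sketch of my own (not the code from the question): a hypothetical variant, reverse_significant_bits, that reverses only the significant bits of num, so the loop count genuinely grows with log2(num), roughly one more iteration each time num doubles.

    /* Sketch only (my own variant, not the question's code): reverse just the
       significant bits of num.  The loop runs floor(log2(num)) + 1 times, so
       the work grows with log(num) instead of being fixed at 32. */
    unsigned int reverse_significant_bits(unsigned int num)
    {
        unsigned int width = 0;
        unsigned int tmp = num;
        while (tmp) {                 /* count the significant bits of num */
            width++;
            tmp >>= 1;
        }

        unsigned int reversed = 0;
        for (unsigned int i = 0; i < width; i++) {
            if (num & (1u << i))
                reversed |= 1u << (width - 1 - i);
        }
        return reversed;              /* ran 'width' = O(log num) iterations */
    }

For example, num = 6 (binary 110) has width 3 and comes back as 3 (binary 011) after 3 iterations, while a value near 2^20 would take about 20 iterations.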

1 vote

For the reasons you stated, the complexity is not O(n), it is O(1), that is, constant. Whatever number you plug in, the cost is always the same and depends only on NO_OF_BITS.

But if you consider the order in terms of the maximum number N you support, that is, how the time varies if you support values up to N1 (say 256), N2 (say 65536), 1048576 and so on, then you'll see that you need an increasing number of bits, and therefore the time increases. How much does it increase? Every doubling of the maximum number requires one extra bit, which means the time grows like the base-2 logarithm of MaxN. Hence, perhaps, the O(log n) you're looking for; the sketch below illustrates the doubling-to-one-extra-bit relationship.
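
To make that concrete, here is a self-contained C illustration (my own sketch; the helper name bits_needed is assumed, not part of the answer) that counts how many bits are needed to represent a given maximum value:

    #include <stdio.h>

    /* Assumed helper (illustration only): how many bits are needed to
       represent max_n?  Doubling max_n adds exactly one bit. */
    static unsigned int bits_needed(unsigned long max_n)
    {
        unsigned int bits = 0;
        while (max_n) {
            bits++;
            max_n >>= 1;
        }
        return bits;
    }

    int main(void)
    {
        /* Prints 9 17 21: 256 needs 9 bits, 65536 needs 17, 1048576 needs 21. */
        printf("%u %u %u\n",
               bits_needed(256), bits_needed(65536), bits_needed(1048576));
        return 0;
    }

Going from 65536 to 1048576 multiplies the maximum by 16 yet adds only 4 bits, which is exactly the logarithmic growth described above.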