31 votes

I am in need of a fast integer square root that does not involve any explicit division. The target RISC architecture can do operations like add, mul, sub, shift in one cycle (well - the operation's result is written in the third cycle, really - but there's interleaving), so any integer algorithm that uses these ops and is fast would be much appreciated.

This is what I have right now and I'm thinking that a binary search should be faster, since the following loop executes 16 times every single time (regardless of the value). I haven't debugged it extensively yet (but soon), so perhaps it's possible to have an early exit there:

unsigned short int int_sqrt32(unsigned int x)
{
    unsigned short int res=0;
    unsigned short int add= 0x8000;   
    int i;
    for(i=0;i<16;i++)
    {
        unsigned short int temp=res | add;
        unsigned int g2=temp*temp;      
        if (x>=g2)
        {
            res=temp;           
        }
        add>>=1;
    }
    return res;
}

Looks like the current performance cost of the above [in the context of the target RISC] is a loop of 5 instructions (bitset, mul, compare, store, shift). Probably no space in cache to unroll fully (but this will be the prime candidate for a partial unroll [e.g. a loop of 4 rather than 16], for sure). So, the cost is 16*5 = 80 instructions (plus loop overhead, if not unrolled). Which, if fully interleaved, would cost only 80 (+2 for the last instruction) cycles.

Can I get some other sqrt implementation (using only add, mul, bitshift, store/cmp) under 82 cycles?

FAQ:

  • Why don't you rely on the compiler to produce a good fast code?

    There is no working C → RISC compiler for the platform. I will be porting the current reference C code into hand-written RISC ASM.

  • Did you profile the code to see if sqrt is actually a bottleneck?

    No, there is no need for that. The target RISC chip runs at about 20 MHz, so every single instruction counts. The core loop (calculating the energy-transfer form factor between the shooter and receiver patch), where this sqrt is used, will run ~1,000 times each rendering frame (assuming it will be fast enough, of course), up to 60,000 times per second, and roughly 1,000,000 times for the whole demo.

  • Have you tried to optimize the algorithm to perhaps remove the sqrt?

    Yes, I did that already. In fact, I got rid of 2 sqrts already and lots of divisions (removed or replaced by shifting). I can see a huge performance boost (compared to the reference float version) even on my gigahertz notebook.

  • What is the application?

    It's a real-time progressive-refinement radiosity renderer for the compo demo. The idea is to have one shooting cycle each frame, so it would visibly converge and look better with each rendered frame (e.g. up to 60 times per second, though the SW rasterizer probably won't be that fast [but at least it can run on the other chip in parallel with the RISC - so if it takes 2-3 frames to render the scene, the RISC will have worked through 2-3 frames of radiosity data, in parallel]).

  • Why don't you work directly in target ASM?

    Because radiosity is a slightly involved algorithm and I need the instant edit & continue debugging capability of Visual Studio. What I've done over the weekend in VS (a couple hundred code changes to convert the floating-point math to integer-only) would take me 6 months on the target platform with only print-style debugging.

  • Why can't you use a division?

    Because it's 16 times slower on the target RISC than any of the following: mul, add, sub, shift, compare, load/store (which each take just 1 cycle). So, it's used only when absolutely required (a couple of times already, unfortunately, where shifting could not be used).

  • Can you use look-up tables?

    The engine needs other LUTs already and copying from main RAM to the RISC's little cache is prohibitively expensive (and definitely not each and every frame). But I could perhaps spare 128-256 bytes if it gave me at least a 100-200% boost for sqrt.

  • What's the range of the values for sqrt?

    I managed to reduce it to a mere unsigned 32-bit int (4,294,967,295).

If your target is ARM or ARM-like you might want to have a look at finesse.demon.co.uk/steven/sqrt.html for ideas - I'm pretty sure when I used to program ARM in hand-written assembly (a long while ago) there was an example of how to do this (along with divide) in the Archimedes manual. I would doubt binary search is faster than Newton-Raphson or similar. – abligh
A long time ago in a galaxy far away, when I was coding silly intros for the 80386, I used the Newton algorithm. It still requires division, but if you choose a smart starting value, not too many. – biziclop
If you have access to floats, how about a simple approximation: en.wikipedia.org/wiki/… – CaldasGSM
I'm going out on a limb here, but what is the distribution of your numbers? If many of them are small, maybe it's worth counting how many bits the result will have first and only running the loop that many iterations. (Of course this only applies if you've got clz or something similar as a 1-cycle instruction.) – biziclop
Am I the only one to wonder why the answers come before the questions in the FAQ? – abligh

6 Answers

7 votes

Have a look here.

For instance, at 3(a) there is this method, which is trivially adaptable to do a 64->32 bit square root, and also trivially transcribable to assembler:

/* by Jim Ulery */
static unsigned julery_isqrt(unsigned long val) {
    unsigned long temp, g=0, b = 0x8000, bshft = 15;
    do {
        if (val >= (temp = (((g << 1) + b)<<bshft--))) {
           g += b;
           val -= temp;
        }
    } while (b >>= 1);
    return g;
}

No divisions, no multiplications, bit shifts only. However, the time taken will be somewhat unpredictable particularly if you use a branch (on ARM RISC conditional instructions would work).
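The remark about conditional instructions suggests a branch-free variant. As a sketch (my own, not from the linked page), the conditional update in the Ulery loop can be folded into a mask so the loop body takes the same path every iteration:

```c
#include <stdint.h>

/* branch-free sketch of the Ulery loop above: the comparison result becomes
   an all-ones/all-zeros mask, so the add and subtract always execute */
static uint32_t isqrt_branchless(uint32_t val) {
    uint32_t temp, g = 0, b = 0x8000, bshft = 15;
    do {
        temp = ((g << 1) + b) << bshft--;
        uint32_t m = -(uint32_t)(val >= temp); /* 0xFFFFFFFF if taken, else 0 */
        g += b & m;
        val -= temp & m;
    } while (b >>= 1);
    return g;
}
```

On ARM the mask would simply be conditional ADD/SUB instructions; on other RISCs the mask trick costs one extra op per iteration but makes the timing fully predictable.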

In general, this page lists ways to calculate square roots. If you happen to want to produce a fast inverse square root (i.e. x**(-0.5) ), or are just interested in amazing ways to optimise code, take a look at this, this and this.
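For reference, the "fast inverse square root" those links discuss is usually illustrated with the well-known 0x5f3759df bit trick. Shown here as a float-based sketch (so it wouldn't apply directly to an integer-only target):

```c
#include <stdint.h>
#include <string.h>

/* classic fast inverse square root: integer bit trick plus one Newton step */
static float fast_rsqrt(float x) {
    float half = 0.5f * x;
    uint32_t i;
    memcpy(&i, &x, sizeof i);      /* reinterpret the float bits as an integer */
    i = 0x5f3759df - (i >> 1);     /* magic constant gives the initial guess */
    float y;
    memcpy(&y, &i, sizeof y);
    y = y * (1.5f - half * y * y); /* one Newton iteration refines the guess */
    return y;
}
```

One iteration brings the result within roughly 0.2% of x**(-0.5).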

6 votes

This is the same as yours, but with fewer ops. (I count 9 ops in the loop in your code, including the test and increment of i in the for loop and 3 assignments, but perhaps some of those disappear when coded in ASM. There are 6 ops in the code below, if you count g*g>n as two (no assignment).)

int isqrt(int n) {
  int g = 0x8000;
  int c = 0x8000;
  for (;;) {
    if (g*g > n) {
      g ^= c;
    }
    c >>= 1;
    if (c == 0) {
      return g;
    }
    g |= c;
  }
}

I got it here. You can maybe eliminate a comparison if you unroll the loop and jump to the appropriate spot based on the highest non-zero bit in the input.
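As a sketch of that idea (using the compiler's __builtin_clz as a stand-in for whatever leading-zero-count mechanism the target provides), the loop can start at the bit position of the result's msb instead of always at bit 15:

```c
#include <stdint.h>

/* sketch: skip the leading iterations by starting the bit-by-bit loop at the
   msb of the result; __builtin_clz stands in for CLZ emulation on the target */
static uint32_t isqrt_skip(uint32_t n) {
    if (n == 0) return 0;
    int s = (31 - __builtin_clz(n)) >> 1; /* bit position of the result's msb */
    uint32_t g = 0, c = 1u << s;
    while (c) {
        uint32_t t = g | c;
        if (t * t <= n) g = t;  /* t <= 0xFFFF, so t*t fits in 32 bits */
        c >>= 1;
    }
    return g;
}
```

Small inputs now run only as many iterations as the result has bits, which improves the average case without changing the worst case.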

Update

I've been thinking more about using Newton's method. In theory, the number of bits of accuracy should double with each iteration. That means Newton's method is much worse than any of the other suggestions when there are few correct bits in the answer; however, the situation changes when there are a lot of correct bits in the answer. Considering that most suggestions seem to take 4 cycles per bit, one iteration of Newton's method (16 cycles for division + 1 for addition + 1 for shift = 18 cycles) is not worthwhile unless it gains more than 4 bits.

So, my suggestion is to build up 8 bits of the answer by one of the suggested methods (8*4 = 32 cycles) then perform one iteration of Newton's method (18 cycles) to double the number of bits to 16. That's a total of 50 cycles (plus maybe an extra 4 cycles to get 9 bits before applying Newton's method, plus maybe 2 cycles to overcome the overshoot occasionally experienced by Newton's method). That's a maximum of 56 cycles which as far as I can see rivals any of the other suggestions.
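A minimal sketch of this hybrid (my own illustration, not tuned to the cycle counts above): the linear phase builds the top 8 result bits, one division-based Newton step refines them, and a trailing correction loop stands in for the careful overshoot handling:

```c
#include <stdint.h>

/* hypothetical sketch: linear phase for the top 8 result bits, then one
   Newton step; a trailing loop stands in for the exact overshoot fix-up */
static uint32_t isqrt_hybrid(uint32_t n) {
    uint32_t g = 0;
    for (int b = 15; b >= 8; --b) {       /* linear (bit-by-bit) phase */
        uint32_t t = g | (1u << b);
        if (t * t <= n) g = t;
    }
    if (g == 0) {                         /* n < 2^16: just finish linearly */
        for (int b = 7; b >= 0; --b) {
            uint32_t t = g | (1u << b);
            if (t * t <= n) g = t;
        }
        return g;
    }
    g = (g + n / g) >> 1;                 /* one Newton step: ~8 -> ~16 bits */
    if (g > 65535) g = 65535;             /* the iterate can overshoot 2^16-1 */
    while (g * g > n) --g;                /* integer Newton never undershoots */
    return g;
}
```

The correction loop exploits the fact that the integer Newton iterate (g + n/g)/2 never falls below floor(sqrt(n)), so decrementing while g*g > n always lands on the exact result.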

Second Update

I coded the hybrid algorithm idea. Newton's method itself has no overhead; you just apply and double the number of significant digits. The issue is to have a predictable number of significant digits before you apply Newton's method. For that, we need to figure out where the most significant bit of the answer will appear. Using a modification of the fast DeBruijn sequence method given by another poster, we can perform that calculation in about 12 cycles in my estimation. On the other hand, knowing the position of the msb of the answer speeds up all methods (average, not worst case), so it seems worthwhile no matter what.

After calculating the msb of the answer, I run a number of rounds of the algorithm suggested above, then finish it off with one or two rounds of Newton's method. We decide when to run Newton's method by the following calculation: one bit of the answer takes about 8 cycles according to the calculation in the comments; one round of Newton's method takes about 18 cycles (division, addition, and shift, and maybe assignment), so we should only run Newton's method if we're going to get at least three bits out of it. So for 6-bit answers, we can run the linear method 3 times to get 3 significant bits, then run Newton's method once to get another 3. For 15-bit answers, we run the linear method 4 times to get 4 bits, then Newton's method twice to get another 4 and then another 7. And so on.

Those calculations depend on knowing exactly how many cycles are required to get a bit by the linear method vs. how many are required by Newton's method. If the "economics" change, e.g., by discovering a faster way to build up bits in a linear fashion, the decision of when to invoke Newton's method will change.

I unrolled the loops and implemented the decisions as switches, which I hope will translate into fast table lookups in assembly. I'm not absolutely sure that I've got the minimum number of cycles in each case, so maybe further tuning is possible. E.g., for s=10, you can try to get 5 bits then apply Newton's method once instead of twice.

I've tested the algorithm thoroughly for correctness. Some additional minor speedups are possible if you're willing to accept slightly incorrect answers in some cases. At least two cycles are used after applying Newton's method to correct an off-by-one error that occurs with numbers of the form m^2-1. And a cycle is used testing for input 0 at the beginning, as the algorithm can't handle that input. If you know you're never going to take the square root of zero you can eliminate that test. Finally, if you only need 8 significant bits in the answer, you can drop one of the Newton's method calculations.

#include <inttypes.h>
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

uint32_t isqrt1(uint32_t n);

int main() {
  uint32_t n;
  bool it_works = true;
  for (n = 0; n < UINT32_MAX; ++n) {
    uint32_t sr = isqrt1(n);
    if ( sr*sr > n || ( sr < 65535 && (sr+1)*(sr+1) <= n )) {
      it_works = false;
      printf("isqrt(%" PRIu32 ") = %" PRIu32 "\n", n, sr);
    }
  }
  if (it_works) {
    printf("it works\n");
  }
  return 0;
}

/* table modified to return shift s to move 1 to msb of square root of x */
/*
static const uint8_t debruijn32[32] = {
    0, 31, 9, 30, 3,  8, 13, 29,  2,  5,  7, 21, 12, 24, 28, 19,
    1, 10, 4, 14, 6, 22, 25, 20, 11, 15, 23, 26, 16, 27, 17, 18
};
*/

static const uint8_t debruijn32[32] = {
  15,  0, 11, 0, 14, 11, 9, 1, 14, 13, 12, 5, 9, 3, 1, 6,
  15, 10, 13, 8, 12,  4, 3, 5, 10,  8,  4, 2, 7, 2, 7, 6
};

/* based on CLZ emulation for non-zero arguments, from
 * http://stackoverflow.com/questions/23856596/counting-leading-zeros-in-a-32-bit-unsigned-integer-with-best-algorithm-in-c-pro
 */
uint8_t shift_for_msb_of_sqrt(uint32_t x) {
  x |= x >>  1;
  x |= x >>  2;
  x |= x >>  4;
  x |= x >>  8;
  x |= x >> 16;
  x++;
  return debruijn32 [x * 0x076be629 >> 27];
}

uint32_t isqrt1(uint32_t n) {
  if (n==0) return 0;

  uint32_t s = shift_for_msb_of_sqrt(n);
  uint32_t c = 1 << s;
  uint32_t g = c;

  switch (s) {
  case 9:
  case 5:
    if (g*g > n) {
      g ^= c;
    }
    c >>= 1;
    g |= c;
  case 15:
  case 14:
  case 13:
  case 8:
  case 7:
  case 4:
    if (g*g > n) {
      g ^= c;
    }
    c >>= 1;
    g |= c;
  case 12:
  case 11:
  case 10:
  case 6:
  case 3:
    if (g*g > n) {
      g ^= c;
    }
    c >>= 1;
    g |= c;
  case 2:
    if (g*g > n) {
      g ^= c;
    }
    c >>= 1;
    g |= c;
  case 1:
    if (g*g > n) {
      g ^= c;
    }
    c >>= 1;
    g |= c;
  case 0:
    if (g*g > n) {
      g ^= c;
    }
  }

  /* now apply one or two rounds of Newton's method */
  switch (s) {
  case 15:
  case 14:
  case 13:
  case 12:
  case 11:
  case 10:
    g = (g + n/g) >> 1;
  case 9:
  case 8:
  case 7:
  case 6:
    g = (g + n/g) >> 1;
  }

  /* correct potential error at m^2-1 for Newton's method */
  return (g==65536 || g*g>n) ? g-1 : g;
}

In light testing on my machine (which admittedly is nothing like yours), the new isqrt1 routine runs about 40% faster on average than the previous isqrt routine I gave.

4 votes

If multiplication is the same speed (or faster than!) addition and shifting, or if you lack a fast shift-by-amount-contained-in-a-register instruction, then the following will not be helpful. Otherwise:

You're computing temp*temp afresh on each loop cycle, but temp = res | add, which is the same as res + add since their bits don't overlap, and (a) you have already computed res*res on a previous loop cycle, and (b) add has special structure (it's always just a single bit). So, using the fact that (a+b)^2 = a^2 + 2ab + b^2, and you already have a^2, and b^2 is just a single bit shifted twice as far to the left as the single-bit b, and 2ab is just a left-shifted by 1 more position than the location of the single bit in b, you can get rid of the multiplication:

unsigned short int int_sqrt32(unsigned int x)
{
    unsigned short int res = 0;
    unsigned int res2 = 0;
    unsigned short int add = 0x8000;   
    unsigned int add2 = 0x40000000;   /* add * add */
    int i;
    for(i = 0; i < 16; i++)
    {
        unsigned int g2 = res2 + (res << (16 - i)) + add2; /* (res + add)^2 */
        if (x >= g2)
        {
            res |= add;
            res2 = g2;
        }
        add >>= 1;
        add2 >>= 2;
    }
    return res;
}

Also, I would guess that it's a better idea to use the same type (unsigned int) for all variables since, according to the C standard, all arithmetic requires promotion (conversion) of narrower integer types to the widest type involved before the arithmetic operation is performed, followed by subsequent back-conversion if necessary. (This may of course be optimised away by a sufficiently intelligent compiler, but why take the risk?)

2 votes

From the comment trail, it seems that the RISC processor only provides 32x32->32 bit multiplication and 16x16->32 bit multiplication. A 32x32->64 bit widening multiply, or a MULHI instruction returning the upper 32 bits of a 64-bit product, is not provided.

This would seem to exclude approaches based on Newton-Raphson iteration, which would likely be inefficient, as they typically require either a MULHI instruction or a widening multiply for the intermediate fixed-point arithmetic.

The C99 code below uses a different iterative approach that requires only 16x16->32 bit multiplies, but converges somewhat linearly, requiring up to six iterations. This approach requires CLZ functionality to quickly determine a starting guess for the iterations. The asker stated in the comments that the RISC processor used does not provide CLZ functionality. So emulation of CLZ is required, and since the emulation adds to both storage and instruction count, this may make the approach uncompetitive. I performed a brute-force search to determine the deBruijn lookup table with the smallest multiplier.

This iterative algorithm delivers raw results quite close to the desired results, i.e. (int)sqrt(x), but always somewhat on the high side due to the truncating nature of integer arithmetic. To arrive at the final result, the result is conditionally decremented until the square of the result is less than or equal to the original argument.

The use of the volatile qualifier in the code only serves to establish that all named variables can in fact be allocated as 16-bit data without impacting the functionality. I do not know whether this provides any advantage, but noticed that the OP specifically used 16-bit variables in their code. For production code, volatile should be removed.

Note that for most processors, the correction steps at the end should not involve any branching. The product y*y can be subtracted from x with carry-out (or borrow-out), then y is corrected by a subtract with carry-in (or borrow-in). So each step should be a sequence MUL, SUBcc, SUBC.
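As an illustration of that branchless correction (a hypothetical helper, valid only while y overestimates the root by a small amount, so the difference x - y*y stays within signed 32-bit range):

```c
#include <stdint.h>

/* one branchless correction step (MUL, SUB, conditional decrement):
   assumes y overestimates sqrt(x) by at most a small amount, so that
   x - y*y stays within the signed 32-bit range */
static uint16_t correct_step(uint16_t y, uint32_t x) {
    uint32_t d = x - (uint32_t)y * y;   /* wraps "negative" if y*y > x */
    y -= (uint16_t)(d >> 31);           /* borrow (sign) bit decrements y */
    return y;
}
```

In assembly the shift disappears: the subtract's borrow-out feeds directly into a subtract-with-borrow on y.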

Because implementation of the iteration by a loop incurs substantial overhead, I have elected to completely unroll the loop, but provide two early-out checks. Tallying the operations manually I count 46 operations for the fastest case, 54 operations for the average case, and 60 operations for the worst case.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <math.h>

static const uint8_t clz_tab[32] = {
    31, 22, 30, 21, 18, 10, 29,  2, 20, 17, 15, 13, 9,  6, 28, 1,
    23, 19, 11,  3, 16, 14,  7, 24, 12,  4,  8, 25, 5, 26, 27, 0};

uint8_t clz (uint32_t a)
{
    a |= a >> 16;
    a |= a >> 8;
    a |= a >> 4;
    a |= a >> 2;
    a |= a >> 1;
    return clz_tab [0x07c4acdd * a >> 27];
}
  
/* 16 x 16 -> 32 bit unsigned multiplication; should be single instruction */
uint32_t umul16w (uint16_t a, uint16_t b)
{
    return (uint32_t)a * b;
}

/* Reza Hashemian, "Square Rooting Algorithms for Integer and Floating-Point
   Numbers", IEEE Transactions on Computers, Vol. 39, No. 8, Aug. 1990, p. 1025
*/
uint16_t isqrt (uint32_t x)
{
    volatile uint16_t y, z, lsb, mpo, mmo, lz, t;

    if (x == 0) return x; // early out, code below can't handle zero

    lz = clz (x);         // #leading zeros, 32-lz = #bits of argument
    lsb = lz & 1;
    mpo = 17 - (lz >> 1); // m+1, result has roughly half the #bits of argument
    mmo = mpo - 2;        // m-1
    t = 1 << mmo;         // power of two for two's complement of initial guess
    y = t - (x >> (mpo - lsb)); // initial guess for sqrt
    t = t + t;            // power of two for two's complement of result
    z = y;

    y = (umul16w (y, y) >> mpo) + z;
    y = (umul16w (y, y) >> mpo) + z;
    if (x >= 0x40400) {
        y = (umul16w (y, y) >> mpo) + z;
        y = (umul16w (y, y) >> mpo) + z;
        if (x >= 0x1002000) {
            y = (umul16w (y, y) >> mpo) + z;
            y = (umul16w (y, y) >> mpo) + z;
        }
    }

    y = t - y; // raw result is 2's complement of iterated solution
    y = y - umul16w (lsb, (umul16w (y, 19195) >> 16)); // mult. by sqrt(0.5) 

    if ((int32_t)(x - umul16w (y, y)) < 0) y--; // iteration may overestimate 
    if ((int32_t)(x - umul16w (y, y)) < 0) y--; // result, adjust downward if 
    if ((int32_t)(x - umul16w (y, y)) < 0) y--; // necessary 

    return y; // (int)sqrt(x)
}

int main (void)
{
    uint32_t x = 0;
    uint16_t res, ref;

    do {
        ref = (uint16_t)sqrt((double)x);
        res = isqrt (x);
        if (res != ref) {
            printf ("!!!! x=%08x  res=%08x  ref=%08x\n", x, res, ref);
            return EXIT_FAILURE;
        }
        x++;
    } while (x);
    return EXIT_SUCCESS;
}

Another possibility is to use the Newton iteration for the square root, despite the high cost of division. For small inputs only one iteration will be required. Although the asker did not state this, based on the execution time of 16 cycles for the DIV operation I strongly suspect that this is actually a 32/16->16 bit division which requires additional guard code to avoid overflow, defined as a quotient that does not fit into 16 bits. I have added appropriate safeguards to my code based on this assumption.

Since the Newton iteration doubles the number of good bits each time it is applied, we only need a low-precision initial guess which can easily be retrieved from a table based on the five leading bits of the argument. In order to grab these, we first normalize the argument into 2.30 fixed-point format with an additional implicit scale factor of 2^(32-(lz & ~1)), where lz is the number of leading zeros in the argument. As in the previous approach, the iteration doesn't always deliver an accurate result, so a correction must be applied should the preliminary result be too big. I count 49 cycles for the fast path, 70 cycles for the slow path (average 60 cycles).

static const uint16_t sqrt_tab[32] = 
{ 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000,
  0x85ff, 0x8cff, 0x94ff, 0x9aff, 0xa1ff, 0xa7ff, 0xadff, 0xb3ff,
  0xb9ff, 0xbeff, 0xc4ff, 0xc9ff, 0xceff, 0xd3ff, 0xd8ff, 0xdcff, 
  0xe1ff, 0xe6ff, 0xeaff, 0xeeff, 0xf3ff, 0xf7ff, 0xfbff, 0xffff
};

/* 32/16->16 bit division. Note: Will overflow if x[31:16] >= y */
uint16_t udiv_32_16 (uint32_t x, uint16_t y)
{
    uint16_t r = x / y;
    return r;
}

/* table lookup for initial guess followed by division-based Newton iteration*/ 
uint16_t isqrt (uint32_t x)
{
    volatile uint16_t q, lz, y, i, xh;

    if (x == 0) return x; // early out, code below can't handle zero

    // initial guess based on leading 5 bits of argument normalized to 2.30
    lz = clz (x);
    i = ((x << (lz & ~1)) >> 27);
    y = sqrt_tab[i] >> (lz >> 1);
    xh = (x >> 16); // needed for overflow check on division

    // first Newton iteration, guard against overflow in division
    q = 0xffff;
    if (xh < y) q = udiv_32_16 (x, y);
    y = (q + y) >> 1;
    if (lz < 10) {
        // second Newton iteration, guard against overflow in division
        q = 0xffff;
        if (xh < y) q = udiv_32_16 (x, y);
        y = (q + y) >> 1;
    }

    if (umul16w (y, y) > x) y--; // adjust quotient if too large

    return y; // (int)sqrt(x)
}
1 vote

Here is a less incremental version of the technique @j_random_hacker described. On at least one processor it was just a bit faster when I fiddled with this a couple of years ago. I have no idea why.

// assumes unsigned is 32 bits
unsigned isqrt1(unsigned x) {
  unsigned r = 0, r2 = 0; 
  for (int p = 15; p >= 0; --p) {
    unsigned tr2 = r2 + (r << (p + 1)) + (1u << (p + p));
    if (tr2 <= x) {
      r2 = tr2;
      r |= (1u << p);
    }
  }
  return r;
}

/*
gcc 6.3 -O2
isqrt(unsigned int):
        mov     esi, 15
        xor     r9d, r9d
        xor     eax, eax
        mov     r8d, 1
.L3:
        lea     ecx, [rsi+1]
        mov     edx, eax
        mov     r10d, r8d
        sal     edx, cl
        lea     ecx, [rsi+rsi]
        sal     r10d, cl
        add     edx, r10d
        add     edx, r9d
        cmp     edx, edi
        ja      .L2
        mov     r11d, r8d
        mov     ecx, esi
        mov     r9d, edx
        sal     r11d, cl
        or      eax, r11d
.L2:
        sub     esi, 1
        cmp     esi, -1
        jne     .L3
        rep ret
*/

If you turn up gcc 9 x86 optimization, it completely unrolls the loop and folds constants. The result is still only about 100 instructions.

-1 votes

I don't know how to turn it into an efficient algorithm, but when I investigated this in the '80s an interesting pattern emerged: when rounding square roots, each successive root value is shared by two more integers than the preceding one (after zero).

So, one number (zero) has a square root of zero, two have a square root of 1 (1 and 2), 4 have a square root of two (3, 4, 5 and 6) and so on. Probably not a useful answer but interesting nonetheless.