13 votes

I have a huge memory block (a bit-vector) of size N bits within one memory page; consider N to be about 5000 on average, i.e. 5k bits to store some flags information.

At certain points in time (super-frequently, performance-critical) I need to find the first set bit in this whole big bit-vector. Now I do it per 64-bit word, i.e. with the help of __builtin_ctzll(). But when N grows and the search algorithm itself cannot be improved, there may be some possibility to scale this search through wider memory accesses. This is the main problem in a few words.
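Roughly, what I do now looks like this (simplified sketch; names are illustrative):

#include <stdint.h>
#include <stddef.h>

/* Current approach, simplified: scan 64 bits at a time.
   Returns the index of the first set bit, or -1 if the vector is all zero. */
static int find_first_set(const uint64_t *words, size_t nwords)
{
    for (size_t i = 0; i < nwords; i++) {
        if (words[i] != 0)
            return (int)(i * 64) + __builtin_ctzll(words[i]);
    }
    return -1;
}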

There is a single assembly instruction called BSF that gives the position of the lowest set bit (this is what GCC's __builtin_ctzll() maps to). So on x86 I can find the first set bit cheaply within 64-bit words.

But what about scaling through memory width?
E.g. is there a way to do it efficiently with 128 / 256 / 512 -bit registers?
Basically I'm interested in some C API function to achieve this, but I also want to know what the method is based on.

UPD: As for CPUs, I'd like this optimization to support the following CPU lineups:
Intel Xeon E3-12XX, Intel Xeon E5-22XX/26XX/E56XX, Intel Core i3-5XX/4XXX/8XXX, Intel Core i5-7XX, Intel Celeron G18XX/G49XX (optional for Intel Atom N2600, Intel Celeron N2807, Cortex-A53/72)

P.S. In the mentioned algorithm, before the final bit scan I need to combine k (on average 20-40) N-bit vectors with bitwise AND (the AND result is just a preparatory stage for the bit scan). It is also desirable to do this with memory-width scaling (i.e. more efficiently than one AND per 64-bit word).

Read also: Find first set

Registers aren't visible to other threads / CPU cores, and "atomically" reading your own private state either has no meaning or is trivial. Did you mean a single asm instruction, or something so that some change to a register is atomic wrt. interrupts? But if you're interested in that, how does the C tag make sense? – Peter Cordes
If you insist on it being "atomic" or a single CPU instruction, you're out of luck. (Except maybe on RISC-V RV128.) For a huge array, you scan for the first non-zero vector, and load the byte or dword that contains it, then bit-scan that and add the vector and element offsets. Make up your mind what you want your question to be about, and take out the word "atomic" from the title if you want the answer to be anything other than "impossible". (Or define exactly what you mean, atomic wrt. what possible observers.) – Peter Cordes
@PeterCordes I have a huge memory block (bit-vector) within one memory page with size N, consider N is 5000, i.e. 5k bits to store some flags information. At certain points in time (very often) I need to find the first set bit in this big bit-vector. Now I do it per 64-bit word, i.e. with the help of __builtin_ctzll(). But when N grows and the search algorithm cannot be improved, there can be some possibility to scale this search through the expansion of memory access width. This is the main problem in a few words – red0ct
@PeterCordes It doesn't matter ;) I can arrange these bits inside as I like, and search from any suitable side – red0ct
@red0ct Okay, for ARM that would be ARMv8-A, no extensions. For x86, the optional parts go up to SSSE3 (so no AVX and no SSE4.1, both of which would help). As for the parts in your first list, the Intel Core i3-5XX and i5-7XX parts are the oldest and only go up to SSE4.2. Okay. I can work with that. – fuz

3 Answers

3 votes

The best way to find the first set bit within a whole vector (AFAIK) involves finding the first non-zero SIMD element (e.g. a byte or dword), then using a bit-scan on that (__builtin_ctz / bsf / tzcnt / ffs-1). As such, ctz(vector) is not itself a useful building block for searching an array, only for after the loop.

Instead you want to loop over the array searching for a non-zero vector, using a whole-vector check involving SSE4.1 ptest xmm0,xmm0 / jz .loop (3 uops), or with SSE2 pcmpeqd v, zero / pmovmskb / cmp eax, 0xffff / je .loop (3 uops after cmp/jcc macro-fusion). https://uops.info/

Once you do find a non-zero vector, pcmpeqd / movmskps / bsf on that to find a dword index, then load that dword and bsf it. Add the start-bit position (CHAR_BIT*4*dword_idx) to the bsf bit-position within that element. This is a fairly long dependency chain for latency, including an integer L1d load latency. But since you just loaded the vector, at least you can be fairly confident you'll hit in cache when you load it again with integer. (If the vector was generated on the fly, then it's probably still best to store / reload it and let store-forwarding work, instead of trying to generate a shuffle control for vpermilps/movd or SSSE3 pshufb/movd/movzx ecx, al.)

The loop problem is very much like strlen or memchr, except we're rejecting a single value (0) and looking for anything else. Still, we can take inspiration from hand-optimized asm strlen / memchr implementations like glibc's, for example loading multiple vectors and doing one check to see if any of them have what they're looking for. (For strlen, combine with pminub to get a 0 if any element is 0; for memchr, OR the pcmpeqb compare results.) For our purposes, the reduction operation we want is OR: any non-zero input will make the output non-zero, and bitwise boolean ops can run on any vector ALU port.

(If the expected first-bit-position isn't very high, it's not worth being too aggressive with this: if the first set bit is in the first vector, sorting things out between 2 vectors you've loaded will be slower. 5000 bits is only 625 bytes, or 19.5 AVX2 __m256i vectors. And the first set bit is probably not always right at the end.)

AVX2 version:

This checks pairs of 32-byte vectors (i.e. whole cache lines) for non-zero, and if found then sorts that out into one 64-bit bitmap for a single CTZ operation. That extra shift/OR costs latency in the critical path, but the hope is that we get to the first 1 bit sooner.

Combining 2 vectors down to one with OR means it's not super useful to know which element of the OR result was non-zero. We basically redo the work inside the if. That's the price we pay for keeping the number of uops low for the actual search part.

(The if body ends with a return, so in the asm it's actually like an if()break, or actually an if()goto out of the loop, since it goes to a different place than the not-found return -1 from falling through out of the loop.)

// untested, especially the pointer end condition, but compiles to asm that looks good
// Assumes len is a multiple of 64 bytes

#include <immintrin.h>
#include <stdint.h>
#include <string.h>

// aliasing-safe: p can point to any C data type
int bitscan_avx2(const char *p, size_t len /* in bytes */)
{
    //assert(len % 64 == 0);
    //optimal if p is 64-byte aligned, so we're checking single cache-lines
    const char *p_init = p;
    const char *endp = p + len - 64;
    do {
        __m256i v1 = _mm256_loadu_si256((const __m256i*)p);
        __m256i v2 = _mm256_loadu_si256((const __m256i*)(p+32));
        __m256i or = _mm256_or_si256(v1,v2);
        if (!_mm256_testz_si256(or, or)){        // find the first non-zero cache line
            __m256i v1z = _mm256_cmpeq_epi32(v1, _mm256_setzero_si256());
            __m256i v2z = _mm256_cmpeq_epi32(v2, _mm256_setzero_si256());
            uint32_t zero_map = _mm256_movemask_ps(_mm256_castsi256_ps(v1z));
            zero_map |= _mm256_movemask_ps(_mm256_castsi256_ps(v2z)) << 8;

            unsigned idx = __builtin_ctz(~zero_map);  // Use ctzll for GCC, because GCC is dumb and won't optimize away a movsx
            uint32_t nonzero_chunk;
            memcpy(&nonzero_chunk, p+4*idx, sizeof(nonzero_chunk));  // aliasing / alignment-safe load

            return (p-p_init + 4*idx)*8 + __builtin_ctz(nonzero_chunk);
        }
        p += 64;
    }while(p <= endp);   // <= so the last 64-byte block is also checked
    return -1;
}

On Godbolt with clang 12 -O3 -march=haswell:

bitscan_avx2:
        lea     rax, [rdi + rsi]
        add     rax, -64                 # endp
        xor     ecx, ecx
.LBB0_1:                                # =>This Inner Loop Header: Depth=1
        vmovdqu ymm1, ymmword ptr [rdi]  # do {
        vmovdqu ymm0, ymmword ptr [rdi + 32]
        vpor    ymm2, ymm0, ymm1
        vptest  ymm2, ymm2
        jne     .LBB0_2                       # if() goto out of the inner loop
        add     ecx, 512                      # bit-counter incremented in the loop, for (p-p_init) * 8
        add     rdi, 64
        cmp     rdi, rax
        jbe     .LBB0_1                  # }while(p<=endp)

        mov     eax, -1               # not-found return path
        vzeroupper
        ret

.LBB0_2:
        vpxor   xmm2, xmm2, xmm2
        vpcmpeqd        ymm1, ymm1, ymm2
        vmovmskps       eax, ymm1
        vpcmpeqd        ymm0, ymm0, ymm2
        vmovmskps       edx, ymm0
        shl     edx, 8
        or      edx, eax             # mov ah,dl  would be interesting, but compilers won't do it.
        not     edx                  # one_positions = ~zero_positions
        xor     eax, eax                # break false dependency
        tzcnt   eax, edx             # dword_idx
        xor     edx, edx
        tzcnt   edx, dword ptr [rdi + 4*rax]   # p[dword_idx]
        shl     eax, 5               # dword_idx * 4 * CHAR_BIT
        add     eax, edx
        add     eax, ecx
        vzeroupper
        ret

This is probably not optimal for all CPUs, e.g. maybe we could use a memory-source vpcmpeqd for at least one of the inputs, and not cost any extra front-end uops, only back-end. As long as compilers keep using pointer-increments, not indexed addressing modes that would un-laminate. That would reduce the amount of work needed after the branch (which probably mispredicts).

To still use vptest, you might have to take advantage of the CF result (CF = ((~dst & src) == 0)) by testing against a vector of all-ones, so we could check that all elements matched (i.e. the input was all zeros). Unfortunately, Can PTEST be used to test if two registers are both zero or some other condition? - no, I don't think we can usefully use vptest without a vpor.

Clang decided not to actually subtract pointers after the loop, instead doing more work in the search loop (the add ecx, 512 bit-counter). :/ The loop is 9 uops (after macro-fusion of cmp/jcc), so unfortunately it can only run a bit less than 1 iteration per 2 cycles. So it's only managing less than half of L1d cache bandwidth.

But apparently a single array isn't your real problem.

Without AVX

16-byte vectors mean we don't have to deal with the "in-lane" behaviour of AVX2 shuffles. So instead of OR, we can combine with packssdw or packsswb. Any set bits in the high half of a pack input will signed-saturate the result to 0x80 or 0x7f. (So signed saturation is key, not unsigned packuswb which will saturate signed-negative inputs to 0.)

However, shuffles only run on port 5 on Intel CPUs, so beware of throughput limits. ptest on Skylake for example is 2 uops, p5 and p0, so using packsswb + ptest + jz would limit to one iteration per 2 clocks. But pcmpeqd + pmovmskb don't.

Unfortunately, using pcmpeq on each input separately before packing / combining would cost more uops. But would reduce the amount of work left for the cleanup, and if the loop-exit usually involves a branch mispredict, that might reduce overall latency.

2x pcmpeqd => packssdw => pmovmskb => not => bsf would give you a number you have to multiply by 2 to use as a byte offset to get to the non-zero dword. e.g. memcpy(&tmp_u32, p + (2*idx), sizeof(tmp_u32));. i.e. bsf eax, [rdi + rdx*2].
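Putting that together, here's an untested SSE2-only sketch in the same spirit as the AVX2 version above (function name is illustrative; assumes len is a multiple of 32 bytes):

#include <emmintrin.h>   // SSE2
#include <stdint.h>
#include <string.h>

int bitscan_sse2(const char *p, size_t len /* in bytes */)
{
    const char *p_init = p;
    const char *endp = p + len;
    const __m128i zero = _mm_setzero_si128();
    do {
        __m128i v1 = _mm_loadu_si128((const __m128i*)p);
        __m128i v2 = _mm_loadu_si128((const __m128i*)(p+16));
        __m128i or12 = _mm_or_si128(v1, v2);
        __m128i eqz = _mm_cmpeq_epi32(or12, zero);
        if (_mm_movemask_epi8(eqz) != 0xFFFF) {   // some dword is non-zero
            __m128i z1 = _mm_cmpeq_epi32(v1, zero);
            __m128i z2 = _mm_cmpeq_epi32(v2, zero);
            // packssdw: each 32-bit 0 / -1 compare result saturates to a 16-bit 0 / -1
            __m128i packed = _mm_packs_epi32(z1, z2);
            unsigned zero_map = _mm_movemask_epi8(packed);  // 2 mask bits per dword
            unsigned idx = __builtin_ctz(~zero_map);  // 2 * dword index; 2*idx = byte offset
            uint32_t chunk;
            memcpy(&chunk, p + 2*idx, sizeof(chunk));  // aliasing / alignment-safe load
            return (int)((p - p_init + 2*idx)*8 + __builtin_ctz(chunk));
        }
        p += 32;
    } while (p < endp);
    return -1;
}

A compiler should turn __builtin_ctz(~zero_map) into not / bsf (or tzcnt), and the scalar reload into a bsf/tzcnt with the [reg + reg*2] addressing mode described above.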

With AVX-512:

You mentioned 512-bit vectors, but none of the CPUs you listed support AVX-512. Even if they did, you might want to avoid 512-bit vectors because of the CPU-frequency penalty for 512-bit SIMD instructions, unless your program spends a lot of time doing this and your data is hot in L1d cache, so you can truly benefit instead of still bottlenecking on L2 cache bandwidth. But even with 256-bit vectors, AVX-512 has new instructions that are useful for this (a sketch follows the list):

  • integer compares (vpcmpb/w/d/q) have a choice of predicate, so you can do not-equal instead of having to invert later with NOT. Or even test-into-register vptestmd so you don't need a zeroed vector to compare against.
  • compare-into-mask is sort of like pcmpeq + movmsk, except the result is in a k register, still need a kmovq rax, k0 before you can tzcnt.
  • kortest - set FLAGS according to the OR of two mask registers being non-zero. So the search loop could do vpcmpd k0, ymm0, [rdi] / vpcmpd k1, ymm0, [rdi+32] / kortestw k0, k1
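For example, an untested sketch with AVX512VL intrinsics and 256-bit vectors (function name is illustrative; again, none of the CPUs listed in the question can run this; assumes len is a multiple of 64 bytes):

#include <immintrin.h>
#include <stdint.h>
#include <string.h>

// Requires AVX512F + AVX512VL.
int bitscan_avx512vl(const char *p, size_t len /* in bytes */)
{
    const char *p_init = p;
    const char *endp = p + len;
    do {
        __m256i v1 = _mm256_loadu_si256((const __m256i*)p);
        __m256i v2 = _mm256_loadu_si256((const __m256i*)(p+32));
        __mmask8 m1 = _mm256_test_epi32_mask(v1, v1);  // vptestmd: mask bit set where dword non-zero
        __mmask8 m2 = _mm256_test_epi32_mask(v2, v2);
        if (m1 | m2) {                                 // a compiler could branch on this with kortest
            unsigned nonzero_map = (unsigned)m1 | ((unsigned)m2 << 8);
            unsigned idx = __builtin_ctz(nonzero_map); // first non-zero dword; no NOT needed
            uint32_t chunk;
            memcpy(&chunk, p + 4*idx, sizeof(chunk));  // aliasing / alignment-safe load
            return (int)((p - p_init + 4*idx)*8 + __builtin_ctz(chunk));
        }
        p += 64;
    } while (p < endp);
    return -1;
}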

ANDing multiple input arrays

You mention your real problem is that you have multiple (on average 20-40) arrays of bits, and you want to intersect them with AND and find the first set bit in the intersection.

You may want to do this in blocks of a few vectors, optimistically hoping that there will be a set bit somewhere early.

AND groups of 4 or 8 inputs, accumulating across results with OR so you can tell if there were any 1s in this block of maybe 4 vectors from each input. (If there weren't any 1 bits, do another block of 4 vectors, 64 or 128 bytes, while you still have the pointers loaded, because the intersection would definitely be empty if you moved on to the other inputs now.) Tuning these chunk sizes depends on how sparse your 1s are, e.g. maybe always work in chunks of 6 or 8 vectors. Power-of-2 numbers are nice, though, because you can pad your allocations out to a multiple of 64 or 128 bytes so you don't have to worry about stopping early. A sketch of this is below.

(For odd numbers of inputs, maybe pass the same pointer twice to a function expecting 4 inputs, instead of dispatching to special versions of the loop for every possible number.)

L1d cache is 8-way associative (before Ice Lake with 12-way), and a limited number of integer/pointer registers can make it a bad idea to try to read too many streams at once. You probably don't want a level of indirection that makes the compiler loop over an actual array in memory of pointers either.
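To make the shape of this concrete, here's an untested sketch for exactly 4 inputs (AVX2, checking one 64-byte block per iteration rather than the larger chunks discussed above; and4_find_nonzero_block and the out buffer are illustrative names, assumes len is a multiple of 64 bytes):

#include <immintrin.h>
#include <stddef.h>

/* AND 4 bit-vectors one cache line (2 vectors) at a time, storing the
   intersection into out. Returns the byte offset of the first non-zero
   cache line of the intersection, or -1 if it is entirely zero. */
long and4_find_nonzero_block(const char *a, const char *b,
                             const char *c, const char *d,
                             char *out, size_t len)
{
    for (size_t off = 0; off < len; off += 64) {
        __m256i and_lo = _mm256_and_si256(
            _mm256_and_si256(_mm256_loadu_si256((const __m256i*)(a+off)),
                             _mm256_loadu_si256((const __m256i*)(b+off))),
            _mm256_and_si256(_mm256_loadu_si256((const __m256i*)(c+off)),
                             _mm256_loadu_si256((const __m256i*)(d+off))));
        __m256i and_hi = _mm256_and_si256(
            _mm256_and_si256(_mm256_loadu_si256((const __m256i*)(a+off+32)),
                             _mm256_loadu_si256((const __m256i*)(b+off+32))),
            _mm256_and_si256(_mm256_loadu_si256((const __m256i*)(c+off+32)),
                             _mm256_loadu_si256((const __m256i*)(d+off+32))));
        _mm256_storeu_si256((__m256i*)(out+off), and_lo);
        _mm256_storeu_si256((__m256i*)(out+off+32), and_hi);
        __m256i any = _mm256_or_si256(and_lo, and_hi);  // OR-accumulate to check the block
        if (!_mm256_testz_si256(any, any))
            return (long)off;   // first 1 bit of the intersection is in this cache line
    }
    return -1;
}

The caller would then bit-scan out starting at the returned offset (e.g. with bitscan_avx2 above) and add 8 times the offset to the bit position. Storing the AND results means the later scalar reload can be satisfied by store-forwarding, per the earlier note.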

8 votes

This answer is in a different vein, but if you know in advance that you're going to be maintaining a collection of B bits and need to be able to efficiently set and clear bits while also figuring out which bit is the first bit set, you may want to use a data structure like a van Emde Boas tree or a y-fast trie. These data structures are designed to store integers in a small range, so instead of setting or clearing individual bits, you could add or remove the index of the bit you want to set/clear. They're quite fast - you can add or remove items in time O(log log B), and they let you find the smallest item in time O(1). Figure that if B ≈ 50000, then log log B is about 4.

I'm aware this doesn't directly address how to find the highest bit set in a huge bitvector. If your setup is such that you have to work with bitvectors, the other answers might be more helpful. But if you have the option to reframe the problem in a way that doesn't involve bitvector searching, these other data structures might be a better fit.
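To give a flavor of why this is fast, here's a minimal sketch of a flattened, two-level version of the same summary idea (not a real van Emde Boas tree, since there's no recursion, and it only covers 64*64 = 4096 bits; a larger N needs another level or a wider summary; all names are illustrative):

#include <stdint.h>

/* Two-level bitmap: summary bit i is set iff words[i] != 0. */
typedef struct {
    uint64_t summary;
    uint64_t words[64];   /* up to 4096 bits */
} bitset4k;

static void bitset4k_set(bitset4k *s, unsigned bit) {
    s->words[bit / 64] |= 1ULL << (bit % 64);
    s->summary |= 1ULL << (bit / 64);
}

static void bitset4k_clear(bitset4k *s, unsigned bit) {
    s->words[bit / 64] &= ~(1ULL << (bit % 64));
    if (s->words[bit / 64] == 0)
        s->summary &= ~(1ULL << (bit / 64));
}

/* Returns the index of the first set bit, or -1 if empty: two ctz's, no scan. */
static int bitset4k_first(const bitset4k *s) {
    if (s->summary == 0) return -1;
    unsigned w = __builtin_ctzll(s->summary);
    return (int)(w * 64 + __builtin_ctzll(s->words[w]));
}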

0 votes

You may try this function; your compiler should optimize this code for your CPU. It's not perfect, but it should be relatively quick and mostly portable.

P.S. length should be divisible by 8 for maximum speed.

#include <stdio.h>
#include <stdint.h>

/* Returns the index of the first set bit, counting bit 0 as the MSB of bf[0] */
/* (i.e. bits are numbered from the most significant end of each word). */
/* Return value is between 0 and 64 times length. */
/* A return value of exactly 64 times length means no set bit was found, i.e. bf is all zero. */
uint32_t offset_fsb(const uint64_t *bf, const register uint16_t length){
    register uint16_t i = 0;
    uint16_t remainder = length % 8;

    switch(remainder){
        case 0 : /* 512bit compare */
            while(i < length){
                if(bf[i] | bf[i+1] | bf[i+2] | bf[i+3] | bf[i+4] | bf[i+5] | bf[i+6] | bf[i+7]) break;
                i += 8;
            }
            /* fall through */

        case 4 : /* 256bit compare */
            while(i < length){
                if(bf[i] | bf[i+1] | bf[i+2] | bf[i+3]) break;
                i += 4;
            }
            /* fall through */

        case 6 : /* 128bit compare */    
            /* fall through */
        case 2 : /* 128bit compare */
            while(i < length){
                if(bf[i] | bf[i+1]) break;
                i += 2;
            }
            /* fall through */

        default : /* 64bit compare */
            while(i < length){
                if(bf[i]) break;
                i++;
            }
    }

    register uint32_t offset_fsb = i * 64;

    /* Check the word we stopped at; guard i < length so an all-zero bf */
    /* doesn't read past the end of the array. */
    if(i < length && bf[i]){
        register uint64_t s = bf[i];
        offset_fsb += 63;
        while(s >>= 1) offset_fsb--;  /* 63 - floor(log2(s)): MSB index counted from the top */
    }

    return offset_fsb;
}

int main(void){
    uint64_t test[16] = {0};
    test[15] = 1;   /* MSB-first numbering: bit index 15*64 + 63 = 1023 */

    printf("offset_fsb = %u\n", (unsigned)offset_fsb(test, 16));

    return 0;
}