2
votes

I would like to modify a global variable that is shared between different tasks and IRQ contexts in an RTOS, so the modification needs to be atomic. In my current implementation, I have been using the disable_irq/enable_irq functions to make the read-modify-write atomic:

extern int g_var;

void set_bit_atomic(int mask)
{
    disable_irq();      /* mask interrupts around the read-modify-write */
    g_var |= mask;
    enable_irq();
}

I've found the __sync_bool_compare_and_swap built-in in the GCC documentation as a helper for atomic operations.

My current toolchain is KEIL MDK, and I would like to switch to the approach shown below:

void set_bit_atomic(int mask)
{
    volatile int tmp;
    do {
        tmp = g_var;
    } while (!__sync_bool_compare_and_swap(&g_var, tmp, tmp | mask));
}

How can I write a __sync_bool_compare_and_swap function for the ARMv4 instruction set (as inline assembly)?

You could see how GCC does it in assembly and clone it with modifications (gcc -S file.c). – user1551592
ARM uses load-linked/store-conditional instructions to perform atomic operations. (See the links in this question, maybe.) – Kerrek SB
I'm not convinced that there is a huge amount of difference. From what I understand, ARMv6 is required for the LDREX/STREX instructions, which are what provide the "compare and swap" pair of instructions on ARM. – Mats Petersson
@Kerrek SB The link shows ARMv6 or above instruction sets. I am using an ARM7TDMI (ARMv4), and unfortunately it does not support these instructions. – albin
@KerrekSB: That won't help for the ARMv4 architecture, as far as I understand. – Mats Petersson
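
For reference, the LDREX/STREX approach mentioned in the comments would look roughly like the sketch below (my own illustration in GCC-style inline assembly, not part of the original thread). It requires ARMv6 or later, so it cannot be used on the ARM7TDMI (ARMv4).

static inline int cas_armv6(volatile int *ptr, int oldval, int newval)
{
    int tmp, status;

    __asm__ volatile(
        "1: ldrex   %0, [%2]      \n"   /* tmp = *ptr (exclusive load)      */
        "   mov     %1, #1        \n"   /* assume failure                   */
        "   cmp     %0, %3        \n"   /* current value == oldval?         */
        "   bne     2f            \n"   /* no: give up, status stays 1      */
        "   strex   %1, %4, [%2]  \n"   /* try store; status = 0 on success */
        "   cmp     %1, #0        \n"
        "   bne     1b            \n"   /* lost exclusivity: retry          */
        "2:                       \n"
        : "=&r"(tmp), "=&r"(status)
        : "r"(ptr), "r"(oldval), "r"(newval)
        : "cc", "memory");

    return status == 0;                 /* nonzero (true) if the swap occurred */
}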

1 Answer

1
votes

I have found a similar implementation, the __kernel_cmpxchg function, in the Linux kernel source.

It is written for ARMv5 and earlier, and it seems to work on the ARM7TDMI (ARMv4).

1:      ldr     r3, [r2]        @ load current val
        subs    r3, r3, r0      @ compare with oldval
2:      streq   r1, [r2]        @ store newval if eq
        rsbs    r0, r3, #0      @ set return val and C flag
        bx      lr              @ or "mov pc, lr" if no thumb support

Details can be found at this link.

There are two important issues that I would like to warn about:

1- __kernel_cmpxchg returns 0 when the swap occurred, while __sync_bool_compare_and_swap returns true on success.

2- The function prototypes are different.

typedef int (*__kernel_cmpxchg_t)(int oldval, int newval, volatile int *ptr);
#define __kernel_cmpxchg ((__kernel_cmpxchg_t)0xffff0fc0)

bool __sync_bool_compare_and_swap (type *ptr, type oldval, type newval, ...)
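
A minimal sketch of a wrapper that bridges the two (assuming my_compare_and_swap keeps the __kernel_cmpxchg argument order and its 0-on-success return convention) could look like this:

/* Sketch: forwards to the kernel user helper at 0xffff0fc0 and keeps
 * the __kernel_cmpxchg convention: returns 0 if the swap occurred. */
static inline int my_compare_and_swap(int oldval, int newval, volatile int *ptr)
{
    return __kernel_cmpxchg(oldval, newval, ptr);
}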

Therefore, I had to change the usage as below:

void set_bit_atomic(int mask)
{
    volatile int tmp;
    do {
        tmp = g_var;
    } while (my_compare_and_swap(tmp, tmp | mask, &g_var)); /* retry while nonzero, i.e. while the swap did not occur */
}

Caveat: This code does not work properly without kernel support: on pre-ARMv6, the Linux kernel restarts the __kernel_cmpxchg sequence if it is interrupted, which a bare-metal system cannot rely on. See the comments below.
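
If no such kernel helper is available (for example on a bare-metal RTOS), one possible fallback is to emulate the compare-and-swap with the interrupt masking from the question. A minimal sketch, assuming disable_irq()/enable_irq() behave as in the question and the target is single-core:

/* Sketch: compare-and-swap emulated by masking interrupts (single core only).
 * Returns 0 if the swap occurred, matching the __kernel_cmpxchg convention. */
static int irq_cmpxchg(int oldval, int newval, volatile int *ptr)
{
    int ret = 1;

    disable_irq();
    if (*ptr == oldval) {
        *ptr = newval;
        ret = 0;
    }
    enable_irq();

    return ret;
}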