1 vote

#include <unistd.h> // for usleep()

class Foo {
public:
    void fetch(void)
    {
        int temp = -1;
        someSlowFunction(&temp);
        bar = temp;   // publish the result with a single plain int store
    }
    int getBar(void)
    {
        return bar;
    }
    void someSlowFunction(int *ptr)
    {
        usleep(10000);
        *ptr = 0;
    }
private:
    int bar;
};

I'm new to atomic operations so I may get some concepts wrong.

Considering the above code, and assuming that loads and stores of an int are atomic [Note 1], getBar() should only ever return the value of bar from before or after a fetch().

However, if the compiler is smart enough, it could optimize away temp and transform fetch() into:

    void Foo::fetch(void)
    {
        bar=-1;
        someSlowFunction(&bar);
    }

In that case, getBar() could return -1, or some other intermediate state produced inside someSlowFunction(), under certain timing conditions.

Is this risk real? Does the standard prevent such an optimization?


Note 1: http://preshing.com/20130618/atomic-vs-non-atomic-operations/

The language standards have nothing to say about atomicity in this case. Maybe integer assignment is atomic, maybe it isn’t. Since non-atomic operations don’t make any guarantees, plain integer assignment in C is non-atomic by definition.

In practice, we usually know more about our target platforms than that. For example, it’s common knowledge that on all modern x86, x64, Itanium, SPARC, ARM and PowerPC processors, plain 32-bit integer assignment is atomic as long as the target variable is naturally aligned. You can verify it by consulting your processor manual and/or compiler documentation. In the games industry, I can tell you that a lot of 32-bit integer assignments rely on this particular guarantee.

I'm targeting ARM Cortex-A8 here, so I consider this a safe assumption.

Comments

  • If you need synchronization then use a std::mutex or std::atomic. – NathanOliver
  • "assuming loading and storing int type are atomic" When is that true? – Lightness Races in Orbit
  • "then getBar() could only get the bar before or after a fetch()" Not true. – Lightness Races in Orbit
  • "if a compiler is smart enough, it could optimize away temp and change it to:" Also not true. – Lightness Races in Orbit
  • This code is as thread safe as it gets - there are no threads there, no synchronization, no nothing. To get a real answer, I suggest you elaborate on the threaded usage here. – SergeyA

4 Answers

3 votes

Compiler optimizations cannot break thread safety!

You might, however, experience issues with optimizations in code that appeared to be thread safe but really only worked by pure luck.

If you access data from multiple threads, you must either

  • Protect the appropriate sections using std::mutex or the like.
  • or, use std::atomic.

If you don't, the compiler might perform optimizations that are next to impossible to anticipate.
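
For the code in the question, a minimal sketch of the std::atomic option (assuming C++11; the member names follow the question, everything else is illustrative) might look like this:

    #include <atomic>
    #include <unistd.h> // for usleep()

    class Foo {
    public:
        void fetch(void)
        {
            int temp = -1;
            someSlowFunction(&temp);
            bar.store(temp);      // atomic store; readers never observe the -1
        }
        int getBar(void)
        {
            return bar.load();    // atomic load, sequentially consistent by default
        }
        void someSlowFunction(int *ptr)
        {
            usleep(10000);
            *ptr = 0;
        }
    private:
        std::atomic<int> bar{0};
    };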

I recommend watching CppCon 2014: Herb Sutter "Lock-Free Programming (or, Juggling Razor Blades), Part I" and Part II

0 votes

Now that my question in the comments has been answered, this makes more sense. Let's analyze the thread safety here, given that fetch() and getBar() are called from different threads. Several points need to be considered:

  • 'Dirty reads', or garbage reads due to an interrupted write. While a general possibility, this does not happen for aligned ints on the 3 chip families I am familiar with. Let's discard this possibility for now and assume that read values are always clean.
  • 'Improper reads', or the possibility of reading something from bar that was never written there. Would that be possible? Optimizing away temp on the compiler's part is, in my opinion, possible, but I am no expert in this matter. Let's assume it does not happen. One caveat would still remain: without synchronization you might NEVER see the new value of bar. Not within a reasonable time, simply never (see the sketch below).
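
A minimal, hypothetical sketch of that last caveat (the flag variable and the two functions are illustrative assumptions, not code from the question): with a plain non-atomic int, the compiler is allowed to hoist the load out of the polling loop, so the reader may spin forever.

    #include <thread>

    int flag = 0; // plain, non-atomic int (hypothetical example)

    void writer() { flag = 1; }

    void reader()
    {
        // Data race: the compiler may legally read flag once, cache it in a
        // register, and spin forever even after writer() has run.
        while (flag == 0) { }
    }

    int main()
    {
        std::thread t(reader);
        writer();
        t.join(); // may never return without std::atomic or a mutex
        return 0;
    }
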
0 votes

The compiler can apply any transformation that results in the same observable behavior. Assignments to local non-volatile variables are not part of the observable behavior. The compiler may well decide to eliminate temp completely and use bar directly. It may also decide that bar will always end up with the value zero and set it at the beginning of the function (at least in your simplified example).

However, as you can read in James' answer to a related question, the situation is more complex because modern hardware also optimizes the executed code. This means that the CPU reorders instructions, and neither the programmer nor the compiler has any influence on that without using special instructions. You need either std::atomic, explicit memory fences (which I wouldn't recommend because they are quite tricky), or a mutex, which also acts as a memory fence.
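
As a rough illustration of the mutex option (a sketch only, assuming C++11; it keeps the member names from the question):

    #include <mutex>
    #include <unistd.h> // for usleep()

    class Foo {
    public:
        void fetch(void)
        {
            int temp = -1;
            someSlowFunction(&temp);
            std::lock_guard<std::mutex> lock(m);
            bar = temp;              // published under the lock
        }
        int getBar(void)
        {
            std::lock_guard<std::mutex> lock(m); // the lock also acts as a fence
            return bar;
        }
        void someSlowFunction(int *ptr)
        {
            usleep(10000);
            *ptr = 0;
        }
    private:
        std::mutex m;
        int bar = 0;
    };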

-1 votes

It probably wouldn't be optimized that way because of the function call in the middle, but you can declare temp as volatile; this tells the compiler not to perform these kinds of optimizations.

Depending on the platform, you can certainly have cases where multibyte quantities are in an inconsistent state. It doesn't even need to be thread related. For example, a device experiencing low voltage during a power brown-out can leave memory in an inconsistent state. If you have pointers getting corrupted, then it's usually bad news.

One way I approached this on a system without mutexes was to ensure every piece of data could be verified. For example, for every datum T, there would be a validation checksum C and a backup U.

A set operation would be as follows:

U = T
T = new value
C = checksum(T)

And a get operation would be as follows:

is checksum(T) == C
    yes: return T
     no: return U

This guarantees that whatever is returned is in a consistent state. I would apply this algorithm to the entire OS so that, for example, entire files could be restored.
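
A minimal C++ sketch of that set/get scheme (the struct name, field names, and checksum function below are illustrative assumptions based on the description above):

    #include <cstdint>

    struct Guarded {
        int32_t  T = 0; // the datum
        int32_t  U = 0; // backup of the previous value
        uint32_t C = 0; // validation checksum of T

        static uint32_t checksum(int32_t v)
        {
            // Placeholder checksum; a real system would pick something stronger.
            return static_cast<uint32_t>(v) ^ 0xA5A5A5A5u;
        }

        void set(int32_t value)
        {
            U = T;           // keep the old value as a fallback
            T = value;       // write the new value
            C = checksum(T); // seal it with a checksum
        }

        int32_t get() const
        {
            // If the write was interrupted, the checksum won't match and we
            // fall back to the backup value.
            return (checksum(T) == C) ? T : U;
        }
    };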

If you want to ensure atomicity without getting into mutexes and the like, try to use the smallest types possible. For example, does bar need to be an int, or would an unsigned char or bool suffice?