When we talk about atomic variables, such as C++11's atomic<>, are they lock-free? Or is lock-freedom something different? If I manage a queue with atomic variables, will it be slower than a lock-free queue?
2 Answers
The standard does not specify whether atomic objects are lock-free. On a platform that doesn't provide lock-free atomic operations for a type T, atomic<T> objects may be implemented using a mutex, which wouldn't be lock-free. In that case, any containers using these objects in their implementation would not be lock-free either.
The standard does provide a way to check whether an atomic<T> variable is lock-free: you can use var.is_lock_free() or atomic_is_lock_free(&var). These functions are guaranteed to always return the same value for the same type T in a given program execution. For basic types such as int, there are also macros (e.g. ATOMIC_INT_LOCK_FREE) which specify whether lock-free atomic access to that type is available.
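For instance, here is a minimal sketch (the variable name is illustrative) that prints all three of those queries for an int:

    #include <atomic>
    #include <iostream>

    int main() {
        std::atomic<int> counter{0};

        // Run-time queries: member function and free-function form.
        std::cout << "is_lock_free():        " << counter.is_lock_free() << '\n';
        std::cout << "atomic_is_lock_free(): " << std::atomic_is_lock_free(&counter) << '\n';

        // Compile-time macro for int: 0 = never, 1 = sometimes, 2 = always lock-free.
        std::cout << "ATOMIC_INT_LOCK_FREE:  " << ATOMIC_INT_LOCK_FREE << '\n';
    }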
Lock-free usually applies to data structures shared between multiple threads, where the synchronisation mechanism is not mutual exclusion; the intention is that all threads should keep making some kind of progress instead of sleeping on a mutex.
atomic<T> variables don't use locks (at least where T is natively atomic on your platform), but they're not lock-free in the sense above. You might use them in the implementation of a lock-free container, but they're not sufficient on their own.
E.g., atomic<queue<T>> wouldn't suddenly make a normal std::queue into a lock-free data structure. You could, however, implement a genuinely lock-free atomic_queue<T> whose members were atomic.
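A full lock-free queue (e.g. a Michael–Scott queue) is fairly involved, so as a hedged sketch of the idea, here is the simpler classic case, a Treiber-style stack: the only shared state is an atomic head pointer, and threads make progress with compare-and-swap retry loops rather than a mutex. The class name is illustrative, and pop() deliberately leaks nodes because safe memory reclamation (hazard pointers, epochs, etc.) is out of scope here.

    #include <atomic>
    #include <utility>

    template <typename T>
    class lock_free_stack {
        struct node {
            T value;
            node* next;
        };
        std::atomic<node*> head{nullptr};

    public:
        void push(T value) {
            node* n = new node{std::move(value), head.load(std::memory_order_relaxed)};
            // If head changed under us, compare_exchange_weak reloads it into
            // n->next and we simply try again.
            while (!head.compare_exchange_weak(n->next, n,
                                               std::memory_order_release,
                                               std::memory_order_relaxed)) {
            }
        }

        bool pop(T& out) {
            node* n = head.load(std::memory_order_acquire);
            while (n && !head.compare_exchange_weak(n, n->next,
                                                    std::memory_order_acquire,
                                                    std::memory_order_acquire)) {
            }
            if (!n) return false;
            out = std::move(n->value);
            // Not deleting n: another thread may still be dereferencing it.
            // Real implementations pair this with a reclamation scheme.
            return true;
        }
    };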
Note that even if atomic<int> is natively atomic and not emulated with a lock on your platform, that does not make it lock-free in any interesting way. Plain int is already lock-free in this sense: the atomic<> wrapper gets you explicit control of memory ordering, and access to hardware synchronisation primitives.
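As a hedged illustration of that last point (the function and variable names are made up): a release store paired with an acquire load publishes a plain int to another thread with a well-defined happens-before edge, something plain non-atomic variables cannot express on their own.

    #include <atomic>
    #include <cassert>
    #include <thread>

    int payload = 0;                  // plain, non-atomic data
    std::atomic<bool> ready{false};   // flag used to publish it

    void producer() {
        payload = 42;
        ready.store(true, std::memory_order_release);   // publish
    }

    void consumer() {
        while (!ready.load(std::memory_order_acquire)) {
            // spin until the producer's store becomes visible
        }
        assert(payload == 42);   // guaranteed by release/acquire ordering
    }

    int main() {
        std::thread t1(producer), t2(consumer);
        t1.join();
        t2.join();
    }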