4
votes

I read the Apple documentation on GCD queues and started to wonder what happens if I modify, let's say, an instance member of type NSMutableArray (which is not thread safe) on a serial queue. The serial queue guarantees that my operations execute one at a time, but I still feel I need an @synchronized block or some other technique to force a memory barrier, since as far as I understand the tasks on my serial queue can be invoked on different threads. Is that correct? Here is a simple example:

@interface Foo : NSObject

-(void)addNumber:(NSNumber*)number;
-(void)printNumbers;
-(void)clearNumbers;

@end

@implementation Foo
{
   dispatch_queue_t _queue;
   NSMutableArray<NSNumber*>* _numbers;
}

-(instancetype)init
{
   if (self = [super init])
   {
       _queue = dispatch_queue_create(NULL, NULL); // a NULL attribute creates a serial queue
       _numbers = [NSMutableArray array];
   }
   return self;
}

-(void)addNumber:(NSNumber*)number
{
   dispatch_async(_queue,
   ^{
       [_numbers addObject:number];
   });
}

-(void)printNumbers
{
   dispatch_async(_queue,
   ^{
       for (NSNumber* number in _numbers)
       {
           NSLog(@"%@", number);
       }
   });
}

-(void)clearNumbers
{
   dispatch_async(_queue,
   ^{
       _numbers = [NSMutableArray array];
   });
}
@end

As far as I understand, I could run into memory issues here if I call these methods from arbitrary threads. Or does GCD give some guarantees under the hood that make explicit memory barriers unnecessary? Looking at the examples I did not find such constructs anywhere, but coming from C++ it would make sense to touch the member variable under a lock.

2
It's one of the common reasons why we use GCD queues: to eliminate locks from our code. As an aside, you can alternatively use the reader-writer pattern with a concurrent queue (see the pattern described in the latter part of this video): mutate with a barrier, but read without one. gist.github.com/robertmryan/e1f811c246db4e3ede2fdb0a1fb88da8. In high-contention environments, it can offer even better performance than a GCD serial queue. - Rob
FYI, if you're looking for reference, see Concurrency Programming Guide, which says "Avoid using locks. The support provided by dispatch queues and operation queues makes locks unnecessary in most situations. Instead of using locks to protect some shared resource, designate a serial queue (or use operation object dependencies) to execute tasks in the correct order." - Rob
In addition to that, in the cases where you do need locks, never, ever use the NS ones, because their performance characteristics are terrible. Dispatch semaphores are a much better alternative if you can't use a queue for whatever reason, and a pthread mutex can be used if you need a condition lock. - Charles Srstka
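A minimal sketch of the semaphore-as-lock approach mentioned in the comment above (the `Counter` class and its ivars are illustrative, not from the question):

```objc
@implementation Counter
{
    dispatch_semaphore_t _lock;
    NSMutableArray<NSNumber *> *_numbers;
}

- (instancetype)init
{
    if (self = [super init])
    {
        // A semaphore created with a count of 1 behaves like a mutex.
        _lock = dispatch_semaphore_create(1);
        _numbers = [NSMutableArray array];
    }
    return self;
}

- (void)addNumber:(NSNumber *)number
{
    dispatch_semaphore_wait(_lock, DISPATCH_TIME_FOREVER); // acquire
    [_numbers addObject:number];
    dispatch_semaphore_signal(_lock);                      // release
}
@end
```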

2 Answers

5
votes

If your queue is a serial queue, it will only allow one operation at a time, no matter which thread it's running on. Therefore, if every access to a resource occurs on the queue, there's no need to further protect that resource with a lock or a semaphore. In fact, it's possible to use dispatch queues as a locking mechanism, and for some applications, it can work quite well.

Now if your queue is a concurrent queue, then that's a different story, since multiple operations can run at the same time on a concurrent queue. However, GCD provides the dispatch_barrier_sync and dispatch_barrier_async APIs. Operations that you start via these two function calls will cause the queue to wait until all other operations finish before executing your block, and then disallow any more operations from running until the block is finished. In this way, it can temporarily make the queue behave like a serial queue, allowing even a concurrent queue to be used as a sort of locking mechanism (for example, performing reads on the queue via a normal dispatch_sync call, but doing writes via dispatch_barrier_async; if reads occur very frequently and writes very infrequently, this can perform quite well).
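A sketch of that reader-writer pattern, assuming a class like the `Foo` from the question but with `_queue` created as a concurrent queue:

```objc
// In -init, create the queue as *concurrent* instead of serial:
// _queue = dispatch_queue_create("com.example.foo", DISPATCH_QUEUE_CONCURRENT);

- (void)addNumber:(NSNumber *)number
{
    // Writes use a barrier: the block waits for any in-flight reads to
    // finish, and no other block runs until the write completes.
    dispatch_barrier_async(_queue, ^{
        [self->_numbers addObject:number];
    });
}

- (NSArray<NSNumber *> *)allNumbers
{
    // Reads run concurrently with each other and synchronously
    // return an immutable snapshot.
    __block NSArray<NSNumber *> *snapshot;
    dispatch_sync(_queue, ^{
        snapshot = [self->_numbers copy];
    });
    return snapshot;
}
```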

2
votes

The serial queue acts as a data lock, so no further locking or synchronization is needed, at least as far as this code is concerned. The fact that the same queue may be executed using different threads is an implementation detail about which you should not be thinking; queues are the coin of the realm.

There may, of course, be issues in regard to sharing the array between this queue and the main queue, but that's a different matter.
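One way to sidestep that kind of sharing issue is to hand the main queue an immutable snapshot rather than the live array. A sketch, assuming the `Foo` class from the question (the `numbersWithCompletion:` method is hypothetical):

```objc
- (void)numbersWithCompletion:(void (^)(NSArray<NSNumber *> *))completion
{
    dispatch_async(_queue, ^{
        // Copy inside the serial queue, then deliver the immutable
        // snapshot to the main queue; the caller never touches the
        // mutable array itself.
        NSArray<NSNumber *> *snapshot = [self->_numbers copy];
        dispatch_async(dispatch_get_main_queue(), ^{
            completion(snapshot);
        });
    });
}
```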