
I read this in the Apple docs:

Important: You should never call the dispatch_sync or dispatch_sync_f function from a task that is executing in the same queue that you are planning to pass to the function. This is particularly important for serial queues, which are guaranteed to deadlock, but should also be avoided for concurrent queues.

The first case causes a crash, but not the second one; the app works. What exactly happens if you call sync inside a task that is executing on a concurrent queue?

let concurrentQueue = DispatchQueue(label: "queue2.concurrent", attributes: .concurrent)

for _ in 0..<10 {
    concurrentQueue.async {
        print(Thread.current)
        concurrentQueue.sync {
            print(Thread.current)
            print(2)
        }
    }
}

Is that creating a lot of threads? I don't see that in the debugger, and the app does not crash.


1 Answer


The code snippet you provided does not cause a deadlock.

The for loop will put 10 tasks on the queue and then execution will proceed off the bottom of your snippet. The queue will start those tasks on threads in GCD's thread pool as resources permit, possibly creating new threads if there's more available CPU time than the existing threads can take advantage of. Each task will print its current thread description and then submit the inner task synchronously.

Submitting the inner task synchronously simply means that the outer task won't proceed (to its end) until the inner task has completed.

Now, in theory, the queue could run the inner task on yet another thread from GCD's pool. However, since GCD knows that the calling thread is otherwise blocked, it will in practice run the inner task right there on the calling thread. Because of that, there's always a thread available and nothing stops the inner task from starting immediately.
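Here's a minimal sketch to check this, assuming a command-line tool kept alive long enough for the async task to finish (the queue label and the sleep are just illustrative):

import Foundation

let queue = DispatchQueue(label: "demo.sameThread", attributes: .concurrent)

queue.async {
    let outerThread = Thread.current
    queue.sync {
        // GCD typically runs the synchronously submitted block right here,
        // on the thread that called sync, since that thread is blocked anyway.
        // This is an optimization, not a documented guarantee.
        print(Thread.current === outerThread) // usually prints "true"
    }
}

Thread.sleep(forTimeInterval: 1) // give the async task time to run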

So, your code behaves much as if the inner task were simply inlined in the outer task:

for _ in 0..<10 {
    concurrentQueue.async {
        print(Thread.current)
        print(Thread.current)
        print(2)
    }
}

The above ignores barrier tasks. A barrier can block a concurrent queue from executing other tasks even when threads are available; in effect, it temporarily makes the concurrent queue behave like a serial one. In that case, it's definitely possible to deadlock.
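For instance, here's a sketch of that barrier case (the queue label is made up for illustration): the outer task synchronously submits a barrier to its own queue, the barrier can't start until the outer task finishes, and the outer task can't finish until the barrier runs, so they wait on each other forever:

import Foundation

let queue = DispatchQueue(label: "demo.barrier", attributes: .concurrent)

queue.async {
    print("outer task started")
    // The barrier must wait for every earlier task on the queue to finish,
    // including this outer task, which is blocked right here waiting for
    // the barrier: a deadlock.
    queue.sync(flags: .barrier) {
        print("never reached")
    }
    print("never reached either")
}

dispatchMain() // keep the process alive so the deadlock is observable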

In any case, if a task submitted to a concurrent queue does deadlock (in any way, not just by synchronously submitting a task to the same queue; for example, by waiting on a semaphore that will never be signaled), then that thread is blocked forever and can't proceed. If the stuck task is neither a barrier nor being synchronously waited on by a barrier task, though, the queue itself is not blocked: it can keep starting other (non-barrier) tasks that have been queued on it.

The stuck thread would consume a fixed amount of memory and some kernel bookkeeping, but no CPU. However, GCD has an internal limit on the number of threads it will create; if you block enough of them, it eventually stops executing any further asynchronous tasks.
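As a rough sketch of that limit (the exact number of worker threads is an implementation detail; around 64 is commonly observed, not documented), the following blocks every worker thread GCD hands it, after which no further async tasks start:

import Foundation

let queue = DispatchQueue(label: "demo.exhaust", attributes: .concurrent)
let neverSignaled = DispatchSemaphore(value: 0)

for i in 0..<200 {
    queue.async {
        // Each task blocks its worker thread forever. Once GCD hits its
        // internal thread limit, the remaining tasks never start.
        print("task \(i) is now stuck")
        neverSignaled.wait()
    }
}

dispatchMain() // the process stays alive, but new async work stops appearing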