365
votes

What is the difference between concurrent programming and parallel programming? I asked Google but didn't find anything that helped me understand the difference. Could you give me an example of each?

For now I found this explanation: http://www.linux-mag.com/id/7411 - but "concurrency is a property of the program" vs "parallel execution is a property of the machine" isn't enough for me - I still can't say which is which.

18 Answers

332
votes

If your program uses threads (concurrent programming), it's not necessarily going to be executed as such (parallel execution), since that depends on whether the machine can handle several threads.

Here's a visual example. Threads on a non-threaded machine:

        --  --  --
     /              \
>---- --  --  --  -- ---->>

Threads on a threaded machine:

     ------
    /      \
>-------------->>

The dashes represent executed code. As you can see, they both split up and execute separately, but the threaded machine can execute several separate pieces at once.
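
As a concrete (hypothetical) illustration of the two pictures, here is a minimal Go sketch using goroutines as the "threads": with runtime.GOMAXPROCS(1) the two goroutines can only take turns on a single core (the first picture), while allowing more cores lets the runtime execute them at the same time (the second picture).

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func work(name string, wg *sync.WaitGroup) {
        defer wg.Done()
        for i := 0; i < 3; i++ {
            fmt.Println(name, "step", i) // output from A and B may interleave
        }
    }

    func main() {
        runtime.GOMAXPROCS(1) // one core: concurrent execution, not parallel
        // runtime.GOMAXPROCS(runtime.NumCPU()) // several cores: possibly parallel

        var wg sync.WaitGroup
        wg.Add(2)
        go work("A", &wg)
        go work("B", &wg)
        wg.Wait()
    }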

402
votes

Concurrent programming regards operations that appear to overlap and is primarily concerned with the complexity that arises due to non-deterministic control flow. The quantitative costs associated with concurrent programs are typically both throughput and latency. Concurrent programs are often IO bound but not always, e.g. concurrent garbage collectors are entirely on-CPU. The pedagogical example of a concurrent program is a web crawler. This program initiates requests for web pages and accepts the responses concurrently as the results of the downloads become available, accumulating a set of pages that have already been visited. Control flow is non-deterministic because the responses are not necessarily received in the same order each time the program is run. This characteristic can make it very hard to debug concurrent programs. Some applications are fundamentally concurrent, e.g. web servers must handle client connections concurrently. Erlang, F# asynchronous workflows and Scala's Akka library are perhaps the most promising approaches to highly concurrent programming.
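
To make the web-crawler example concrete, here is a rough Go sketch (my own, not from the answer; the URLs are placeholders): several page downloads are in flight at once, and the results are accumulated in whatever order the responses happen to arrive, which is exactly the non-deterministic control flow described above.

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        urls := []string{
            "https://example.com/",
            "https://example.org/",
            "https://example.net/",
        }

        type result struct {
            url    string
            status string
            err    error
        }
        results := make(chan result)

        for _, u := range urls {
            go func(u string) { // one goroutine per outstanding request
                resp, err := http.Get(u)
                if err != nil {
                    results <- result{u, "", err}
                    return
                }
                resp.Body.Close()
                results <- result{u, resp.Status, nil}
            }(u)
        }

        visited := make(map[string]bool) // set of pages already handled
        for range urls {
            r := <-results // arrival order is non-deterministic
            visited[r.url] = true
            fmt.Println(r.url, r.status, r.err)
        }
    }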

Multicore programming is a special case of parallel programming. Parallel programming concerns operations that are overlapped for the specific goal of improving throughput. The difficulties of concurrent programming are evaded by making control flow deterministic. Typically, programs spawn sets of child tasks that run in parallel and the parent task only continues once every subtask has finished. This makes parallel programs much easier to debug than concurrent programs. The hard part of parallel programming is performance optimization with respect to issues such as granularity and communication. The latter is still an issue in the context of multicores because there is a considerable cost associated with transferring data from one cache to another. Dense matrix-matrix multiply is a pedagogical example of parallel programming and it can be solved efficiently by using Strassen's divide-and-conquer algorithm and attacking the sub-problems in parallel. Cilk is perhaps the most promising approach for high-performance parallel programming on multicores and it has been adopted in both Intel's Threading Building Blocks and Microsoft's Task Parallel Library (in .NET 4).
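
Here is a minimal sketch (my own, in Go) of the fork/join shape described above, using a trivial sum instead of a matrix multiply: the parent spawns two child tasks, runs them in parallel, and only continues once both have finished, so the control flow stays deterministic.

    package main

    import (
        "fmt"
        "sync"
    )

    // parallelSum splits the slice in half, sums each half in its own
    // goroutine, and joins the results. The final answer never depends
    // on which half finishes first.
    func parallelSum(xs []int) int {
        mid := len(xs) / 2
        var left, right int
        var wg sync.WaitGroup
        wg.Add(2)
        go func() { defer wg.Done(); left = sum(xs[:mid]) }()
        go func() { defer wg.Done(); right = sum(xs[mid:]) }()
        wg.Wait() // the parent waits for both children
        return left + right
    }

    func sum(xs []int) int {
        total := 0
        for _, x := range xs {
            total += x
        }
        return total
    }

    func main() {
        fmt.Println(parallelSum([]int{1, 2, 3, 4, 5, 6, 7, 8})) // 36
    }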

157
votes

https://joearms.github.io/published/2013-04-05-concurrent-and-parallel-programming.html

Concurrent = Two queues and one coffee machine.

Parallel = Two queues and two coffee machines.

47
votes

I'm interpreting the original question as being about parallel/concurrent computation instead of programming.

In concurrent computation two computations both advance independently of each other. The second computation doesn't have to wait until the first is finished before it can advance. This says nothing, however, about the mechanism by which it is achieved. In a single-core setup, suspending threads and alternating between them is required (also called pre-emptive multithreading).

In parallel computation two computations both advance simultaneously - that is, literally at the same time. This is not possible with a single CPU; it requires a multi-core setup instead.

Images from the article "Parallel vs Concurrent in Node.js" (not reproduced here): suspending and taking turns versus parallel computing.

31
votes

From the processor's point of view, it can be described by this picture (image not reproduced here).

23
votes

I believe concurrent programming refers to multithreaded programming, which is about letting your program run multiple threads, abstracted away from the hardware details.

Parallel programming refers to specifically designing your program's algorithms to take advantage of available parallel execution. For example, you can execute two branches of some algorithm in parallel in the expectation that this will (on average) hit the result sooner than checking the first branch and then the second.
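
A small sketch of that idea (my own, in Go, not from the answer): the search space is split in half, both halves are explored in parallel, and we stop as soon as either branch reports a hit.

    package main

    import "fmt"

    // contains reports whether target occurs in xs, sending the answer on done.
    func contains(xs []int, target int, done chan<- bool) {
        for _, x := range xs {
            if x == target {
                done <- true
                return
            }
        }
        done <- false
    }

    func main() {
        data := []int{9, 4, 7, 1, 8, 3}
        target := 8
        done := make(chan bool, 2) // buffered so the slower branch can still finish

        mid := len(data) / 2
        go contains(data[:mid], target, done) // branch 1: first half
        go contains(data[mid:], target, done) // branch 2: second half

        // The first "true" is enough; only if the first branch to answer
        // says "false" do we need to wait for the second one.
        hit := <-done
        if !hit {
            hit = <-done
        }
        fmt.Println("found:", hit)
    }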

14
votes

I found this content in a blog and thought it useful and relevant.

Concurrency and parallelism are NOT the same thing. Two tasks T1 and T2 are concurrent if the order in which the two tasks are executed in time is not predetermined:

  • T1 may be executed and finished before T2,
  • T2 may be executed and finished before T1,
  • T1 and T2 may be executed simultaneously at the same instant of time (parallelism),
  • T1 and T2 may be executed alternately,
  • ...

If two concurrent threads are scheduled by the OS to run on one single-core non-SMT non-CMP processor, you may get concurrency but not parallelism. Parallelism is possible on multi-core, multi-processor or distributed systems.

Concurrency is often referred to as a property of a program, and is a concept more general than parallelism.

Source: https://blogs.oracle.com/yuanlin/entry/concurrency_vs_parallelism_concurrent_programming

13
votes

They're two phrases that describe the same thing from (very slightly) different viewpoints. Parallel programming is describing the situation from the viewpoint of the hardware -- there are at least two processors (possibly within a single physical package) working on a problem in parallel. Concurrent programming is describing things more from the viewpoint of the software -- two or more actions may happen at exactly the same time (concurrently).

The problem here is that people are trying to use the two phrases to draw a clear distinction when none really exists. The reality is that the dividing line they're trying to draw has been fuzzy and indistinct for decades, and has grown ever more indistinct over time.

What they're trying to discuss is the fact that once upon a time, most computers had only a single CPU. When you executed multiple processes (or threads) on that single CPU, the CPU was only really executing one instruction from one of those threads at a time. The appearance of concurrency was an illusion--the CPU switching between executing instructions from different threads quickly enough that to human perception (to which anything less than 100 ms or so looks instantaneous) it looked like it was doing many things at once.

The obvious contrast to this is a computer with multiple CPUs, or a CPU with multiple cores, so the machine is executing instructions from multiple threads and/or processes at exactly the same time; code executing in one can't/doesn't have any effect on code executing in the other.

Now the problem: such a clean distinction has almost never existed. Computer designers are actually fairly intelligent, so they noticed a long time ago that (for example) when you needed to read some data from an I/O device such as a disk, it took a long time (in terms of CPU cycles) to finish. Instead of leaving the CPU idle while that happened, they figured out various ways of letting one process/thread make an I/O request, and let code from some other process/thread execute on the CPU while the I/O request completed.

So, long before multi-core CPUs became the norm, we had operations from multiple threads happening in parallel.

That's only the tip of the iceberg though. Decades ago, computers started providing another level of parallelism as well. Again, being fairly intelligent people, computer designers noticed that in a lot of cases, they had instructions that didn't affect each other, so it was possible to execute more than one instruction from the same stream at the same time. One early example that became pretty well known was the Control Data 6600. This was (by a fairly wide margin) the fastest computer on earth when it was introduced in 1964--and much of the same basic architecture remains in use today. It tracked the resources used by each instruction, and had a set of execution units that executed instructions as soon as the resources on which they depended became available, very similar to the design of most recent Intel/AMD processors.

But (as the commercials used to say) wait--that's not all. There's yet another design element to add still further confusion. It's been given quite a few different names (e.g., "Hyperthreading", "SMT", "CMP"), but they all refer to the same basic idea: a CPU that can execute multiple threads simultaneously, using a combination of some resources that are independent for each thread, and some resources that are shared between the threads. In a typical case this is combined with the instruction-level parallelism outlined above. To do that, we have two (or more) sets of architectural registers. Then we have a set of execution units that can execute instructions as soon as the necessary resources become available. These often combine well because the instructions from the separate streams virtually never depend on the same resources.

Then, of course, we get to modern systems with multiple cores. Here things are obvious, right? We have N (somewhere between 2 and 256 or so, at the moment) separate cores, that can all execute instructions at the same time, so we have a clear-cut case of real parallelism--executing instructions in one process/thread doesn't affect executing instructions in another.

Well, sort of. Even here we have some independent resources (registers, execution units, at least one level of cache) and some shared resources (typically at least the lowest level of cache, and definitely the memory controllers and bandwidth to memory).

To summarize: the simple scenarios people like to contrast between shared resources and independent resources virtually never happen in real life. With all resources shared, we end up with something like MS-DOS, where we can only run one program at a time, and we have to stop running one before we can run the other at all. With completely independent resources, we have N computers running MS-DOS (without even a network to connect them) with no ability to share anything between them at all (because if we can even share a file, well, that's a shared resource, a violation of the basic premise of nothing being shared).

Every interesting case involves some combination of independent resources and shared resources. Every reasonably modern computer (and a lot that aren't at all modern) has at least some ability to carry out at least a few independent operations simultaneously, and just about anything more sophisticated than MS-DOS has taken advantage of that to at least some degree.

The nice, clean division between "concurrent" and "parallel" that people like to draw just doesn't exist, and almost never has. What people like to classify as "concurrent" usually still involves at least one and often more different types of parallel execution. What they like to classify as "parallel" often involves sharing resources and (for example) one process blocking another's execution while using a resource that's shared between the two.

People trying to draw a clean distinction between "parallel" and "concurrent" are living in a fantasy of computers that never actually existed.

8
votes
  • Concurrent programming, in the general sense, refers to environments in which the tasks we define can occur in any order. One task can occur before or after another, and some or all tasks can be performed at the same time.

  • Parallel programming specifically refers to the simultaneous execution of concurrent tasks on different processors. Thus, all parallel programming is concurrent, but not all concurrent programming is parallel.

Source: PThreads Programming - A POSIX Standard for Better Multiprocessing, Buttlar, Farrell, Nichols

6
votes

Parallel programming happens when code is being executed at the same time and each execution is independent of the others. Therefore, there is usually no preoccupation with shared variables and the like, because conflicts are unlikely to happen.

Concurrent programming, however, consists of code being executed by different processes/threads that share variables and the like, so we must establish some sort of rule to decide which process/thread executes first. We want this so that we can be sure there will be consistency and that we can know with certainty what will happen. If there is no control and all threads compute at the same time and store things in the same variables, how would we know what to expect in the end? Maybe one thread is faster than another; maybe one of the threads even stopped in the middle of its execution and another continued a different computation with a corrupted (not yet fully computed) variable. The possibilities are endless. It's in these situations that we usually use concurrent programming instead of parallel.
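
As a minimal sketch of such a rule (my own example, in Go): a mutex decides which goroutine may touch the shared variable at any moment, so the final value is always the same.

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var (
            mu      sync.Mutex
            counter int
            wg      sync.WaitGroup
        )

        for i := 0; i < 4; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := 0; j < 1000; j++ {
                    mu.Lock() // without this lock, increments could be lost
                    counter++
                    mu.Unlock()
                }
            }()
        }
        wg.Wait()
        fmt.Println(counter) // always 4000 with the mutex in place
    }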

6
votes

Classic scheduling of tasks can be serial, parallel or concurrent.

  • Serial: tasks must be executed one after the other in a known, strict order or it will not work. Easy enough.

  • Parallel: tasks must be executed at the same time or it will not work.

    • Any failure of any of the tasks - functionally or in time - will result in total system failure.
    • All tasks must have a common reliable sense of time.

    Try to avoid this or we will have tears by tea time.

  • Concurrent: we do not care. We are not careless, though: we have analysed it and it doesn't matter; we can therefore execute any task using any available facility at any time. Happy days.

Often, the available scheduling changes at known events which we call a state change.

People often think this is about software, but it is in fact a systems design concept that pre-dates computers; software systems were a little slow on the uptake, and very few software languages even attempt to address the problem. You might try looking up the transputer language occam if you are interested.

Succinctly, systems design addresses the following:

  • the verb - what you are doing (operation or algorithm)
  • the noun - what you are doing it to (data or interface)
  • when - initiation, schedule, state changes
  • how - serial, parallel, concurrent
  • where - once you know when things happen, you can say where they can happen and not before.
  • why - is this the way to do it? Are there other ways, and more importantly, a better way? What happens if you don't do it?

Good luck.

5
votes

In programming, concurrency is the composition of independently executing processes, while parallelism is the simultaneous execution of (possibly related) computations.
- Andrew Gerrand -

And

Concurrency is the composition of independently executing computations. Concurrency is a way to structure software, particularly as a way to write clean code that interacts well with the real world. It is not parallelism.

Concurrency is not parallelism, although it enables parallelism. If you have only one processor, your program can still be concurrent but it cannot be parallel. On the other hand, a well-written concurrent program might run efficiently in parallel on a multiprocessor. That property could be important...
- Rob Pike -

To understand the difference, I strongly recommend watching this video by Rob Pike (one of the creators of Go): Concurrency Is Not Parallelism

3
votes

I understood the difference to be:

1) Concurrent - running in tandem using shared resources
2) Parallel - running side by side using different resources

So you can have two things happening at the same time independently of each other, even if they come together at points (2), or two things drawing on the same resources throughout the operations being executed (1).

3
votes

Although there isn’t complete agreement on the distinction between the terms parallel and concurrent, many authors make the following distinctions:

  • In concurrent computing, a program is one in which multiple tasks can be in progress at any instant.
  • In parallel computing, a program is one in which multiple tasks cooperate closely to solve a problem.

So parallel programs are concurrent, but a program such as a multitasking operating system is also concurrent, even when it is run on a machine with only one core, since multiple tasks can be in progress at any instant.

Source: An introduction to parallel programming, Peter Pacheco

2
votes

Concurrency and Parallelism Source

In a multithreaded process on a single processor, the processor can switch execution resources between threads, resulting in concurrent execution.

In the same multithreaded process in a shared-memory multiprocessor environment, each thread in the process can run on a separate processor at the same time, resulting in parallel execution.

When the process has as many threads as, or fewer threads than, there are processors, the thread support system, in conjunction with the operating environment, ensures that each thread runs on a different processor.

For example, in a matrix multiplication that has the same number of threads and processors, each thread (and each processor) computes a row of the result.
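
A rough sketch of that row-per-thread scheme (my own, in Go, with goroutines standing in for threads): each goroutine computes one row of the result C = A * B.

    package main

    import (
        "fmt"
        "sync"
    )

    func multiply(a, b [][]int) [][]int {
        n, m, p := len(a), len(b), len(b[0])
        c := make([][]int, n)
        var wg sync.WaitGroup
        for i := 0; i < n; i++ {
            wg.Add(1)
            go func(i int) { // one goroutine per row of the result
                defer wg.Done()
                row := make([]int, p)
                for j := 0; j < p; j++ {
                    for k := 0; k < m; k++ {
                        row[j] += a[i][k] * b[k][j]
                    }
                }
                c[i] = row
            }(i)
        }
        wg.Wait()
        return c
    }

    func main() {
        a := [][]int{{1, 2}, {3, 4}}
        b := [][]int{{5, 6}, {7, 8}}
        fmt.Println(multiply(a, b)) // [[19 22] [43 50]]
    }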

0
votes

Different people talk about different kinds of concurrency and parallelism in many different specific cases, so some abstractions to cover their common nature are needed.

The basic abstraction is made in computer science, where both concurrency and parallelism are attributed to properties of programs. Here, programs are formalized descriptions of computing. Such programs need not be in any particular language or encoding, which is implementation-specific. The existence of APIs/ABIs/ISAs/OSes is irrelevant at this level of abstraction. One will surely need more detailed, implementation-specific knowledge (like the threading model) to do concrete programming work, but the spirit behind the basic abstraction is not changed.

A second important fact is that, as general properties, concurrency and parallelism can coexist in many different abstractions.

For the general distinction, see the relevant answer for the basic view of concurrency v. parallelism. (There are also some links containing some additional sources.)

Concurrent programming and parallel programming are techniques to implement these general properties in systems that expose programmability. The systems are usually programming languages and their implementations.

A programming language may expose the intended properties via built-in semantic rules. In most cases, such rules specify the evaluation of specific language structures (e.g. expressions), making the computation involved effectively concurrent or parallel. (More specifically, the computational effects implied by the evaluations can perfectly reflect these properties.) However, concurrent/parallel language semantics are essentially complex, and they are not necessary for practical work (implementing efficient concurrent/parallel algorithms as solutions to realistic problems). So most traditional languages take a more conservative and simpler approach: assume the semantics of evaluation are totally sequential and serial, then provide optional primitives to allow some of the computations to be concurrent or parallel. These primitives can be keywords or procedural constructs ("functions") supported by the language. They are implemented based on interaction with the hosting environment (the OS, or a "bare metal" hardware interface), and are usually opaque to the language (they cannot be derived portably using the language itself). Thus, in this particular kind of high-level abstraction seen by programmers, nothing is concurrent/parallel besides these "magic" primitives and the programs relying on them; programmers can then enjoy a less error-prone programming experience when the concurrency/parallelism properties are not of interest.

Although primitives abstract the complexity away in the most high-level abstractions, the implementations still have extra complexity not exposed by the language features. So some mid-level abstractions are needed. One typical example is threading. Threading allows one or more threads of execution (or simply threads; sometimes also called processes, which is not necessarily the same concept as a task scheduled in an OS) supported by the language implementation (the runtime). Threads are usually preemptively scheduled by the runtime, so a thread needs to know nothing about other threads. Thus, threads are a natural way to implement parallelism as long as they share nothing (no critical resources): just decompose the computation into different threads; once the underlying implementation allows the computation resources to overlap during execution, it works. Threads are also subject to concurrent accesses of shared resources: just access the resources in any order that meets the minimal constraints required by the algorithm, and the implementation will eventually determine when the accesses happen. In such cases, some synchronization operations may be necessary. Some languages treat threading and synchronization operations as part of the high-level abstraction and expose them as primitives, while other languages instead encourage only relatively higher-level primitives (like futures/promises).
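
As a small illustration of the future/promise style mentioned above (my own sketch, in Go, where a one-element channel plays the role of the future):

    package main

    import (
        "fmt"
        "time"
    )

    // futureSquare starts the computation immediately and returns a channel
    // that will eventually carry the result.
    func futureSquare(x int) <-chan int {
        out := make(chan int, 1)
        go func() {
            time.Sleep(10 * time.Millisecond) // stand-in for real work
            out <- x * x
        }()
        return out
    }

    func main() {
        f := futureSquare(7) // kick off the work
        // ... do something else concurrently ...
        fmt.Println(<-f) // block only when the value is actually needed
    }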

Below the level of language-specific threads comes the multitasking of the underlying hosting environment (typically, an OS). OS-level preemptive multitasking is used to implement (preemptive) multithreading. In some environments like Windows NT, the basic scheduling units (the tasks) are also "threads". To differentiate them from the userspace implementation of threads mentioned above, they are called kernel threads, where "kernel" means the kernel of the OS (however, strictly speaking, this is not quite true for Windows NT; the "real" kernel is the NT executive). Kernel threads are not always mapped 1:1 to userspace threads, although 1:1 mapping often reduces most of the mapping overhead. Since kernel threads are heavyweight to create/destroy/communicate with (involving system calls), non-1:1 green threads exist in userspace to overcome these overhead problems, at the cost of the mapping overhead. The choice of mapping depends on the programming paradigm expected in the high-level abstraction. For example, when a huge number of userspace threads is expected to execute concurrently (as in Erlang), 1:1 mapping is never feasible.

Underlying OS multitasking is ISA-level multitasking provided by the logical cores of the processor. This is usually the lowest-level public interface for programmers. Beneath this level there may exist SMT. This is a lower-level form of multithreading implemented by the hardware, but arguably still somewhat programmable - though it is usually only accessible to the processor manufacturer. Note that the hardware design apparently reflects parallelism, but there is also a concurrent scheduling mechanism to make efficient use of the internal hardware resources.

At each level of "threading" mentioned above, both concurrency and parallelism are involved. Although the programming interfaces vary dramatically, all of them are subject to the properties revealed by the basic abstraction at the very beginning.

0
votes

Just sharing an example that helps to highlight the distinction:

Parallel Programming: Say you want to implement the merge-sort algorithm. Each time that you divide the problem into two sub-problems, you can have two threads that solve them. However, in order to do the merge step you have to wait for these two threads to finish since merging requires both sub-solutions. This "mandatory wait" makes this a parallel program.
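
A compact sketch of that shape (my own, in Go; a real implementation would stop spawning below some cutoff size): the two halves are sorted in parallel and the merge step performs the "mandatory wait".

    package main

    import (
        "fmt"
        "sync"
    )

    func mergeSort(xs []int) []int {
        if len(xs) <= 1 {
            return xs
        }
        mid := len(xs) / 2
        var left, right []int
        var wg sync.WaitGroup
        wg.Add(2)
        go func() { defer wg.Done(); left = mergeSort(xs[:mid]) }()
        go func() { defer wg.Done(); right = mergeSort(xs[mid:]) }()
        wg.Wait() // the "mandatory wait" before merging

        // Merge the two sorted halves.
        merged := make([]int, 0, len(xs))
        i, j := 0, 0
        for i < len(left) && j < len(right) {
            if left[i] <= right[j] {
                merged = append(merged, left[i])
                i++
            } else {
                merged = append(merged, right[j])
                j++
            }
        }
        merged = append(merged, left[i:]...)
        merged = append(merged, right[j:]...)
        return merged
    }

    func main() {
        fmt.Println(mergeSort([]int{5, 2, 9, 1, 7, 3})) // [1 2 3 5 7 9]
    }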

Concurrent Program: Say you want to compress n text files and generate a compressed file for each of them. You can have from 2 up to n threads, each handling the compression of a subset of the files. When each thread is done, it's just done; it doesn't have to wait or do anything else. So, since the different tasks are performed in an interleaved manner in "any arbitrary order", the program is concurrent but not parallel.
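
A sketch of the compression example (my own, in Go; the file names are hypothetical): each file is handled by its own goroutine, and no goroutine ever waits on another one's result - the final Wait only keeps the program alive until all of them finish.

    package main

    import (
        "compress/gzip"
        "io"
        "log"
        "os"
        "sync"
    )

    func compress(path string) error {
        in, err := os.Open(path)
        if err != nil {
            return err
        }
        defer in.Close()

        out, err := os.Create(path + ".gz")
        if err != nil {
            return err
        }
        defer out.Close()

        zw := gzip.NewWriter(out)
        if _, err := io.Copy(zw, in); err != nil {
            return err
        }
        return zw.Close()
    }

    func main() {
        files := []string{"a.txt", "b.txt", "c.txt"} // hypothetical inputs
        var wg sync.WaitGroup
        for _, f := range files {
            wg.Add(1)
            go func(f string) {
                defer wg.Done()
                if err := compress(f); err != nil {
                    log.Println(f, err)
                }
            }(f)
        }
        wg.Wait() // only so the program doesn't exit before the work is done
    }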

As someone else mentioned, every parallel program is concurrent (has to be in fact), but not the other way around.

0
votes

I will try to explain it in my own style; it might not be in computer terms, but it gives you the general idea.

Let's take an example, say household chores: cleaning dishes, taking out the trash, mowing the lawn, etc. We also have 3 people (threads), A, B and C, to do them.

Concurrent: The three individuals start different tasks independently, i.e.,

A --> cleaning dishes
B --> taking out trash 
C --> mowing the lawn 

Here, the order of the tasks is non-deterministic, and how long each response takes depends on the amount of work.

Parallel: If we want to improve the throughput, we can assign multiple people to a single task. For example, for cleaning the dishes we assign two people, A soaping the dishes and B washing them, which might improve the throughput.

cleaning the dishes:

A --> soaping the dishes
B --> washing the dishes

and so on.

Hope this gives you an idea! Now move on to the technical terms, which are explained in the other answers ;)