181
votes

Erlang's Characteristics

From Erlang Programming (2009):

Erlang concurrency is fast and scalable. Its processes are lightweight in that the Erlang virtual machine does not create an OS thread for every created process. They are created, scheduled, and handled in the VM, independently of the underlying operating system. As a result, process creation time is of the order of microseconds and independent of the number of concurrently existing processes. Compare this with Java and C#, where for every process an underlying OS thread is created: you will get some very competitive comparisons, with Erlang greatly outperforming both languages.

From Concurrency oriented programming in Erlang (pdf) (slides) (2003):

We observe that the time taken to create an Erlang process is constant 1µs up to 2,500 processes; thereafter it increases to about 3µs for up to 30,000 processes. The performance of Java and C# is shown at the top of the figure. For a small number of processes it takes about 300µs to create a process. Creating more than two thousand processes is impossible.

We see that for up to 30,000 processes the time to send a message between two Erlang processes is about 0.8µs. For C# it takes about 50µs per message, up to the maximum number of processes (which was about 1800 processes). Java was even worse: for up to 100 processes it took about 50µs per message; thereafter it increased rapidly to 10ms per message when there were about 1000 Java processes.
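
To get current numbers rather than figures from 2003, both measurements can be reproduced with a small benchmark along the following lines. This is only a sketch (the module name bench is made up here), and the results will depend on your hardware and Erlang/OTP release:

    -module(bench).
    -export([spawn_time/1, roundtrip_time/1]).

    %% Average time in microseconds to create N short-lived processes.
    spawn_time(N) ->
        {Micros, Pids} =
            timer:tc(fun() ->
                [spawn(fun() -> receive stop -> ok end end) || _ <- lists:seq(1, N)]
            end),
        [Pid ! stop || Pid <- Pids],
        Micros / N.

    %% Average round-trip time in microseconds for N ping/pong exchanges
    %% between the calling process and a spawned echo process.
    roundtrip_time(N) ->
        Echo = spawn(fun Loop() ->
                         receive
                             {ping, From} -> From ! pong, Loop();
                             stop -> ok
                         end
                     end),
        {Micros, ok} = timer:tc(fun() -> ping(Echo, N) end),
        Echo ! stop,
        Micros / N.

    ping(_Echo, 0) -> ok;
    ping(Echo, N) ->
        Echo ! {ping, self()},
        receive pong -> ok end,
        ping(Echo, N - 1).

Calling bench:spawn_time(100000). and bench:roundtrip_time(100000). from the shell then gives per-operation averages in microseconds for your own machine.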

My thoughts

I don't fully understand, technically, why Erlang processes are so much more efficient to spawn and why they have a much smaller memory footprint per process. Both the OS and the Erlang VM have to do scheduling, perform context switches, keep track of the values in the registers, and so on...

Simply put: why aren't OS threads implemented in the same way as Erlang processes? Do they have to support something more? Technically, why do OS threads have a bigger memory footprint and slower spawning and communication, and why can't they be implemented and managed in the same efficient way?

Before attempting to understand the reason why a hypothesis is true, you need to establish whether the hypothesis is true -- e.g., supported by the evidence. Do you have references for any like-for-like comparisons demonstrating that an Erlang process actually is more efficient than (say) a Java thread on an up-to-date JVM? Or a C app using OS process and thread support directly? (The latter seems very, very unlikely to me. The former only somewhat likely.) I mean, with a limited enough environment (Francisco's point), it might be true, but I'd want to see the numbers. -- T.J. Crowder
@Donal: As is the case with so many other absolute statements. :-) -- T.J. Crowder
@Jonas: Thanks, but I got as far as the date (1998-11-02) and JVM version (1.1.6) and stopped. Sun's JVM has improved a fair bit in the last 11.5 years (and presumably Erlang's interpreter has as well), particularly in the area of threading. (Just to be clear, I'm not saying that the hypothesis isn't true [and Francisco and Donal have pointed out why Erlang may be able to do something there]; I'm saying it shouldn't be taken at face value without being checked.) -- T.J. Crowder
@Jonas: "...but I guess you can do it in Erlang..." It's that "guess" part, dude. :-) You're guessing that Erlang's process switching scales up past the thousands. You're guessing that it does so better than Java or OS threads. Guessing and software dev are not a great combination. :-) But I think I've made my point. -- T.J. Crowder
@T.J. Crowder: Install Erlang and run erl +P 1000100 +hms 100, then type {_, PIDs} = timer:tc(lists,map,[fun(_)->spawn(fun()->receive stop -> ok end end) end, lists:seq(1,1000000)]). and wait about three minutes for the result. It's that simple. It takes 140us per process and 1GB of RAM in total on my laptop. But that is directly from the shell; it should be better from compiled code. -- Hynek -Pichi- Vychodil

7 Answers

117
votes

There are several contributing factors:

  1. Erlang processes are not OS processes. They are implemented by the Erlang VM using a lightweight cooperative threading model (preemptive at the Erlang level, but under the control of a cooperatively scheduled runtime). This means that it is much cheaper to switch context, because they only switch at known, controlled points and therefore don't have to save the entire CPU state (normal, SSE and FPU registers, address space mapping, etc.).
  2. Erlang processes use dynamically allocated stacks, which start very small and grow as necessary. This permits the spawning of many thousands — even millions — of Erlang processes without sucking up all available RAM (a small measurement sketch follows this list).
  3. Erlang used to be single-threaded, meaning that there was no requirement to ensure thread-safety between processes. It now supports SMP, but the interaction between Erlang processes on the same scheduler/core is still very lightweight (there are separate run queues per core).
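
As a rough check of point 2, the VM can report how much memory a single idle process uses. A minimal sketch, with an illustrative module name; the exact figure depends on the OTP release, word size, and spawn options:

    -module(proc_mem).
    -export([measure/0]).

    %% Spawn one idle process and report its total memory use in bytes,
    %% as accounted by the VM (heap, stack, and process control block).
    measure() ->
        Pid = spawn(fun() -> receive stop -> ok end end),
        {memory, Bytes} = erlang:process_info(Pid, memory),
        Pid ! stop,
        io:format("one idle process uses ~p bytes~n", [Bytes]).
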
79
votes

After some more research I found a presentation by Joe Armstrong.

From Erlang - software for a concurrent world (presentation) (at 13 min):

[Erlang] is a concurrent language – by that I mean that threads are part of the programming language, they do not belong to the operating system. That's really what's wrong with programming languages like Java and C++. Their threads aren't in the programming language; threads are something in the operating system, and they inherit all the problems that they have in the operating system. One of the problems is the granularity of the memory management system. The memory management in the operating system protects whole pages of memory, so the smallest size that a thread can be is the smallest size of a page. That's actually too big.

If you add more memory to your machine, you still have the same number of bits that protect the memory, so the granularity of the page tables goes up: you end up using, say, 64kB for a process that really only needs a few hundred bytes.

I think it answers, if not all, then at least a few of my questions.
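
To see just how small an Erlang process starts out compared to the page-sized granularity described above, the VM can report its default initial process heap size. A minimal sketch, assuming a reasonably recent OTP release (heap_info is an illustrative module name):

    -module(heap_info).
    -export([report/0]).

    %% Print the default initial process heap size in words and in bytes.
    %% It is typically a few hundred words, well under a 4 kB OS page.
    report() ->
        {min_heap_size, Words} = erlang:system_info(min_heap_size),
        WordSize = erlang:system_info(wordsize),
        io:format("initial heap: ~p words (~p bytes)~n", [Words, Words * WordSize]).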

48
votes

I've implemented coroutines in assembler, and measured performance.

Switching between coroutines, a.k.a. Erlang processes, takes about 16 instructions and 20 nanoseconds on a modern processor. Also, you often know the process you are switching to (example: a process receiving a message in its queue can be implemented as straight hand-off from the calling process to the receiving process) so the scheduler doesn't come into play, making it an O(1) operation.

To switch OS threads, it takes about 500-1000 nanoseconds, because you're calling down to the kernel. The OS thread scheduler might run in O(log(n)) or O(log(log(n))) time, which will start to be noticeable if you have tens of thousands, or even millions of threads.

Therefore, Erlang processes are faster and scale better because both the fundamental operation of switching is faster, and the scheduler runs less often.
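
One way to observe the VM-level switching this answer describes is to count scheduler switches during a burst of message passing, using erlang:statistics(context_switches). This sketch (switch_count is an illustrative name) reuses the same ping/pong pattern as the earlier benchmark sketch; dividing the elapsed time of such a run by the number of switches gives a rough per-switch cost:

    -module(switch_count).
    -export([run/1]).

    %% Count VM-level context switches while the caller and an echo process
    %% exchange N messages. The counter is system-wide, so other running
    %% processes add a little noise.
    run(N) ->
        Echo = spawn(fun Loop() ->
                         receive
                             {ping, From} -> From ! pong, Loop();
                             stop -> ok
                         end
                     end),
        {Before, _} = erlang:statistics(context_switches),
        exchange(Echo, N),
        {After, _} = erlang:statistics(context_switches),
        Echo ! stop,
        After - Before.

    exchange(_Echo, 0) -> ok;
    exchange(Echo, N) ->
        Echo ! {ping, self()},
        receive pong -> ok end,
        exchange(Echo, N - 1).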

38
votes

Erlang processes correspond (approximately) to green threads in other languages; there's no OS-enforced separation between the processes. (There may well be language-enforced separation, but that's a lesser protection despite Erlang doing a better job than most.) Because they're so much lighter-weight, they can be used far more extensively.

OS threads on the other hand are able to be simply scheduled on different CPU cores, and are (mostly) able to support independent CPU-bound processing. OS processes are like OS threads, but with much stronger OS-enforced separation. The price of these capabilities is that OS threads and (even more so) processes are more expensive.


Another way to understand the difference is this. If you were going to write an implementation of Erlang on top of the JVM (not a particularly crazy suggestion), you'd make each Erlang process an object with some state. You'd then have a pool of Thread instances (typically sized according to the number of cores in your host system; that's a tunable parameter in real Erlang runtimes, BTW) which run the Erlang processes. In turn, that distributes the work to be done across the real system resources available. It's a pretty neat way of doing things, but it relies utterly on the fact that each individual Erlang process doesn't do very much. That's OK, of course; Erlang is structured not to require those individual processes to be heavyweight, since it is the overall ensemble of them which executes the program.
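
In a real Erlang runtime, the pool described above corresponds to the VM's scheduler threads, normally one per core and tunable at startup (for example with erl +S). A small sketch to inspect it (sched_info is an illustrative module name):

    -module(sched_info).
    -export([report/0]).

    %% Show how many logical processors the VM detects and how many
    %% scheduler threads are currently running Erlang processes.
    report() ->
        io:format("logical processors: ~p~n",
                  [erlang:system_info(logical_processors_available)]),
        io:format("scheduler threads online: ~p~n",
                  [erlang:system_info(schedulers_online)]).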

In many ways, the real problem is one of terminology. The things that Erlang calls processes (and which correspond strongly to the same concept in CSP, CCS, and particularly the π-calculus) are simply not the same as the things that languages with a C heritage (including C++, Java, C#, and many others) call a process or a thread. There are some similarities (all involve some notion of concurrent execution) but there's definitely no equivalence. So be careful when someone says “process” to you; they might understand it to mean something utterly different…

4
votes

I think Jonas wanted some numbers comparing OS threads to Erlang processes. Joe Armstrong, the author of Programming Erlang, tested a while back how the spawning of Erlang processes scales compared to OS threads. He wrote a simple web server in Erlang and tested it against multi-threaded Apache (since Apache uses OS threads). There's an old website with the data, dating back to 1998; I've managed to find that site exactly once, so I can't supply a link, but the information is out there. The main point of the study was that Apache maxed out at just under 8K processes, while his hand-written Erlang server handled 10K+ processes.

1
vote

Because the Erlang interpreter only has to worry about itself, while the OS has many other things to worry about.

-2
votes

One of the reasons is that an Erlang process is created not in the OS but in the EVM (the Erlang virtual machine), so the cost is smaller.