575
votes

Compiling a C++ file takes a very long time compared to C# or Java. It takes significantly longer to compile a C++ file than it would to run a normal-sized Python script. I'm currently using VC++, but it's the same with any compiler. Why is this?

The two reasons I could think of were loading header files and running the preprocessor, but that doesn't seem like it should explain why it takes so long.

15
VC++ supports precompiled headers. Using them will help. A lot. – Brian
Yes, in my case (mostly C with a few classes - no templates) precompiled headers sped things up about 10x. – Lothar
You can use precompiled headers with C++ code. When you compile, only files with changes are rebuilt, which takes very little time. Recompiling an entire project can take 10 times longer. – Evan Carslake
It takes significantly longer to compile a C++ file - do you mean 2 seconds compared to 1 second? Certainly that is twice as long, but hardly significant. Or do you mean 10 minutes compared to 5 seconds? Please quantify. – Nick Gammon
My bet is on modules; I don't expect modules alone to make C++ projects build faster than other languages do, but with some management they can get really close for most projects. I hope to see a good package manager with artifactory integration once modules arrive. – Abdurrahim

15 Answers

845
votes

Several reasons

Header files

Every single compilation unit requires hundreds or even thousands of headers to be (1) loaded and (2) compiled. Every one of them typically has to be recompiled for every compilation unit, because the preprocessor's output can differ between compilation units: a macro defined in one compilation unit may change the content of a header it includes.

This is probably the main reason, as it requires huge amounts of code to be compiled for every compilation unit, and additionally, every header has to be compiled multiple times (once for every compilation unit that includes it).
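
As a minimal sketch of that effect (file and macro names are hypothetical), the same header can compile to different code in two compilation units, so the first result cannot simply be reused:

// number.h -- its expansion depends on a macro set by the includer
#ifdef USE_DOUBLE
typedef double real;
#else
typedef float real;
#endif

// a.cpp
#define USE_DOUBLE
#include "number.h"   // in this compilation unit, real is double

// b.cpp
#include "number.h"   // in this one, real is float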

Linking

Once compiled, all the object files have to be linked together. This is basically a monolithic process that can't very well be parallelized, and has to process your entire project.

Parsing

The syntax is extremely complicated to parse, depends heavily on context, and is very hard to disambiguate. This takes a lot of time.

Templates

In C#, List<T> is the only type that is compiled, no matter how many instantiations of List you have in your program. In C++, vector<int> is a completely separate type from vector<float>, and each one will have to be compiled separately.

Add to this that templates make up a full Turing-complete "sub-language" that the compiler has to interpret, and this can become ridiculously complicated. Even relatively simple template metaprogramming code can define recursive templates that create dozens and dozens of template instantiations. Templates may also result in extremely complex types, with ridiculously long names, adding a lot of extra work to the linker. (It has to compare a lot of symbol names, and if these names can grow into many thousand characters, that can become fairly expensive).

And of course, they exacerbate the problems with header files, because templates generally have to be defined in headers, which means far more code has to be parsed and compiled for every compilation unit. In plain C code, a header typically only contains forward declarations, but very little actual code. In C++, it is not uncommon for almost all the code to reside in header files.
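
To make the instantiation explosion concrete, here is the classic compile-time factorial, a minimal sketch rather than real-world code; a single use forces the compiler to create eleven distinct types:

#include <cstddef>

// Each distinct N yields a separate type that must be instantiated
// and compiled on its own.
template <std::size_t N>
struct Factorial
{
    static const std::size_t value = N * Factorial<N - 1>::value;
};

template <>
struct Factorial<0>   // explicit specialization stops the recursion
{
    static const std::size_t value = 1;
};

// This one line instantiates Factorial<10>, Factorial<9>, ..., Factorial<0>.
static_assert(Factorial<10>::value == 3628800, "evaluated entirely at compile time");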

Optimization

C++ allows for some very dramatic optimizations. C# or Java don't allow classes to be completely eliminated (they have to be there for reflection purposes), but even a simple C++ template metaprogram can easily generate dozens or hundreds of classes, all of which are inlined and eliminated again in the optimization phase.

Moreover, a C++ program must be fully optimized by the compiler. A C# program can rely on the JIT compiler to perform additional optimizations at load time; C++ doesn't get any such "second chance". What the compiler generates is as optimized as it's going to get.

Machine

C++ is compiled to machine code, which may be somewhat more complicated than the bytecode Java or .NET use (especially in the case of x86). (This is mentioned for completeness only, because it came up in comments and such. In practice, this step is unlikely to take more than a tiny fraction of the total compilation time.)

Conclusion

Most of these factors are shared by C code, which actually compiles fairly efficiently. The parsing step is a lot more complicated in C++, and can take up significantly more time, but the main offender is probably templates. They're useful, and make C++ a far more powerful language, but they also take their toll in terms of compilation speed.

48
votes

Parsing and code generation are actually rather fast. The real problem is opening and closing files. Remember, even with include guards, the compiler still has to open the .h file and read each line (and then ignore it).

A friend once (while bored at work) took his company's application and put everything -- all source and header files -- into one big file. Compile time dropped from 3 hours to 7 minutes.

45
votes

The slowdown is not necessarily the same with any compiler.

I haven't used Delphi or Kylix but back in the MS-DOS days, a Turbo Pascal program would compile almost instantaneously, while the equivalent Turbo C++ program would just crawl.

The two main differences were a very strong module system and a syntax designed to allow single-pass compilation.

It's certainly possible that compilation speed just hasn't been a priority for C++ compiler developers, but there are also some inherent complications in the C/C++ syntax that make it more difficult to process. (I'm not an expert on C, but Walter Bright is, and after building various commercial C/C++ compilers, he created the D language. One of his changes was to enforce a context-free grammar to make the language easier to parse.)

Also, Makefiles are generally set up so that every file is compiled separately in C, so if 10 source files all use the same include file, that include file is processed 10 times.

18
votes

Another reason is the use of the C pre-processor for locating declarations. Even with header guards, .h files still have to be parsed over and over, every time they're included. Some compilers support precompiled headers, which can help with this, but they are not always used.
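
For illustration, a conventional include guard (names hypothetical). It prevents redefinition errors, but the preprocessor must still open the file and scan every line to find the matching #endif, each time the header is included:

// widget.h
#ifndef WIDGET_H
#define WIDGET_H

struct Widget
{
    int id;
};

#endif // WIDGET_H -- the file is still read in full to find this line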

See also: C++ Frequently Questioned Answers

18
votes

C++ is compiled into machine code. So you have the pre-processor, the compiler, the optimizer, and finally the assembler, all of which have to run.

Java and C# are compiled into bytecode/IL, which the Java virtual machine or the .NET runtime executes (or JIT-compiles into machine code just before execution).

Python is an interpreted language that is also compiled into byte-code.

I'm sure there are other reasons for this as well, but in general, not having to compile to native machine language saves time.

15
votes

Building C/C++: what really happens and why does it take so long

A relatively large portion of software development time is not spent on writing, running, debugging or even designing code, but waiting for it to finish compiling. In order to make things fast, we first have to understand what is happening when C/C++ software is compiled. The steps are roughly as follows:

  • Configuration
  • Build tool startup
  • Dependency checking
  • Compilation
  • Linking

We will now look at each step in more detail focusing on how they can be made faster.

Configuration

This is the first step when starting to build. It usually means running a configure script, or CMake, Gyp, SCons, or some other tool. This can take anything from one second to several minutes for very large Autotools-based configure scripts.

This step happens relatively rarely. It only needs to be run when the build configuration changes. Short of changing build systems, there is not much to be done to make this step faster.

Build tool startup

This is what happens when you run make or click on the build icon on an IDE (which is usually an alias for make). The build tool binary starts and reads its configuration files as well as the build configuration, which are usually the same thing.

Depending on build complexity and size, this can take anywhere from a fraction of a second to several seconds. By itself this would not be so bad. Unfortunately, most make-based build systems cause make to be invoked tens to hundreds of times for every single build. Usually this is caused by recursive use of make (which is bad).

It should be noted that the reason Make is so slow is not an implementation bug. The syntax of Makefiles has some quirks that make a really fast implementation all but impossible. This problem is even more noticeable when combined with the next step.

Dependency checking

Once the build tool has read its configuration, it has to determine what files have changed and which ones need to be recompiled. The configuration files contain a directed acyclic graph describing the build dependencies. This graph is usually built during the configure step.

Build tool startup time and the dependency scanner are run on every single build, so their combined runtime determines the lower bound on the edit-compile-debug cycle. For small projects this time is usually a few seconds or so, which is tolerable.

There are alternatives to Make. The fastest of them is Ninja, which was built by Google engineers for Chromium. If you are using CMake or Gyp to build, just switch to their Ninja backends. You don't have to change anything in the build files themselves; just enjoy the speed boost. Ninja is not packaged on most distributions, though, so you might have to install it yourself.

Compilation

At this point we finally invoke the compiler. Cutting some corners, here are the approximate steps taken.

  • Merging includes
  • Parsing the code
  • Code generation/optimization

Contrary to popular belief, compiling C++ is not actually all that slow. The STL is slow, and most build tools used to compile C++ are slow. However, there are faster tools and ways to mitigate the slow parts of the language.

Using them takes a bit of elbow grease, but the benefits are undeniable. Faster build times lead to happier developers, more agility and, eventually, better code.

15
votes

The biggest issues are:

1) The infinite header reparsing. Already mentioned. Mitigations (like #pragma once) usually only work per compilation unit, not per build (see the sketch after this list).

2) The fact that the toolchain is often separated into multiple binaries (make, preprocessor, compiler, assembler, archiver, impdef, linker, and dlltool in extreme cases) that all have to reinitialize and reload all state all the time for each invocation (compiler, assembler) or every couple of files (archiver, linker, and dlltool).
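
A minimal sketch of point 1 (file names hypothetical): the guard saves work within one compilation unit, but every compilation unit in the build still pays to parse the header once:

// widget.h
#pragma once              // suppresses re-parsing within ONE compilation unit

struct Widget { int id; };

// a.cpp
#include "widget.h"       // parsed here...

// b.cpp
#include "widget.h"       // ...and parsed again here: once per unit, per build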

See also this discussion on comp.compilers: http://compilers.iecc.com/comparch/article/03-11-078 especially this one:

http://compilers.iecc.com/comparch/article/02-07-128

Note that John, the moderator of comp.compilers, seems to agree, and that this means it should be possible to achieve similar speeds for C too, if one integrates the toolchain fully and implements precompiled headers. Many commercial C compilers do this to some degree.

Note that the Unix model of factoring everything out into separate binaries is something of a worst-case model for Windows (with its slow process creation). It is very noticeable when comparing GCC build times between Windows and *nix, especially if the make/configure system also calls some programs just to obtain information.

11
votes

A compiled language is always going to require a bigger initial overhead than an interpreted language. In addition, perhaps you didn't structure your C++ code very well. For example:

#include "BigClass.h"

class SmallClass
{
   BigClass m_bigClass;
};

Compiles a lot slower than:

class BigClass;

class SmallClass
{
   BigClass* m_bigClass;
};

9
votes

Some reasons are:

1) C++ grammar is more complex than that of C# or Java and takes more time to parse.

2) (More important) The C++ compiler produces machine code and does all optimizations during compilation. C# and Java go just halfway and leave these steps to the JIT.

8
votes

An easy way to reduce compilation time in larger C++ projects is to make a *.cpp include file that includes all the cpp files in your project, and compile that. Headers then only have to be expanded and parsed once for the whole build. The advantage of this is that compilation errors will still reference the correct file.

For example, assume you have a.cpp, b.cpp, and c.cpp; create a file everything.cpp:

#include "a.cpp"
#include "b.cpp"
#include "c.cpp"

Then build the project by compiling just everything.cpp.
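
One caveat worth noting (a general property of this technique, not specific to any compiler): merging all the .cpp files puts their file-local names into a single scope, so previously independent definitions can collide:

// a.cpp
static int counter = 0;   // file-local, fine when compiled on its own

// b.cpp
static int counter = 0;   // also fine alone, but a redefinition error once
                          // both files are #included into everything.cpp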

6
votes

The trade-off you are getting is that the program runs a wee bit faster. That may be cold comfort to you during development, but it could matter a great deal once development is complete and the program is just being run by users.

5
votes

Most answers are a bit unclear in mentioning that C# will always run slower because of the cost of performing, at run time, actions that C++ performs only once at compile time. There is also the performance cost of runtime dependencies (more things to load in order to run), and C# programs will always have a higher memory footprint. All of this means performance is more closely tied to the capability of the available hardware. The same is true of other languages that are interpreted or depend on a VM.

5
votes

There are two issues I can think of that might be affecting the speed at which your programs in C++ are compiling.

POSSIBLE ISSUE #1 - COMPILING THE HEADER: (This may or may not have already been addressed by another answer or comment.) Microsoft Visual C++ (a.k.a. VC++) supports precompiled headers, which I highly recommend. When you create a new project and select the type of program you are making, a setup wizard window should appear on your screen. If you hit the "Next >" button at the bottom of it, the window will take you to a page with several lists of features; make sure that the box next to the "Precompiled header" option is checked. (NOTE: This has been my experience with Win32 console applications in C++, but it may not be the case for all kinds of C++ programs.)

POSSIBLE ISSUE #2 - THE LOCATION BEING COMPILED TO: This summer, I took a programming course, and we had to store all of our projects on 8 GB flash drives, because the computers in the lab were wiped every night at midnight, which would have erased all of our work. If you are compiling to an external storage device for the sake of portability/security/etc., it can take a very long time for your program to compile (even with the precompiled headers mentioned above), especially if it's a fairly large program. My advice in this case is to create and compile your programs on the hard drive of the computer you're using, and whenever you want or need to stop working on your project(s), transfer them to your external storage device, then click the "Safely Remove Hardware and Eject Media" icon (which should appear as a small flash drive behind a little green circle with a white check mark) to disconnect it.

I hope this helps you; let me know if it does! :)

1
votes

In large object-oriented projects, a significant reason is that C++ makes it hard to confine dependencies.

Private functions need to be listed in their respective class' public header, which makes dependencies more transitive (contagious) than they need to be:

// Ugly private dependencies
#include <map>
#include <list>
#include <chrono>
#include <stdio.h>
#include <Internal/SecretArea.h>
#include <ThirdParty/GodObjectFactory.h>

class ICantHelpButShowMyPrivatePartsSorry
{
public:
    int facade(int);

private:
    std::map<int, int> implementation_detail_1(std::list<int>);
    std::chrono::years implementation_detail_2(FILE*);
    Internal::SecretArea implementation_detail_3(const GodObjectFactory&);
};

If this pattern is blissfully repeated into dependency trees of headers, this tends to create a few "god headers" that indirectly include large portions of all headers in a project. They are as all-knowing as god objects, except that this is not apparent until you draw their inclusion trees.

This adds to compilation time in 2 ways:

  1. The amount of code they add to each compilation unit (.cpp file) that includes them is easily many times more than the cpp files themselves. To put this in perspective, catch2.hpp is 18000 lines, whereas most people (even IDEs) start to struggle editing files larger than 1000-10000 lines.
  2. The number of files that have to be recompiled when a header is edited is not contained to the true set of files that depend on it.

Yes, there are mitigations, like forward declaration, which has perceived downsides, or the pimpl idiom, which is a nonzero cost abstraction. Even though C++ is limitless in what you can do, your peers will be wondering what you have been smoking if you go too far from how it's meant to be.
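
For reference, here is a minimal pimpl sketch (names hypothetical): the public header now depends only on <memory>, so editing the implementation no longer recompiles every client; the nonzero cost is a heap allocation plus an extra indirection per call:

// Facade.h -- no private dependencies leak to clients any more
#include <memory>

class Facade
{
public:
    Facade();
    ~Facade();   // must be defined in Facade.cpp, where Impl is complete

    int facade(int);

private:
    struct Impl;                    // all private parts hide behind this
    std::unique_ptr<Impl> m_impl;
};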

The worst part: If you think about it, the need to declare private functions in their public header is not even necessary: The moral equivalent of member functions can be, and commonly is, mimicked in C, which does not recreate this problem.

1
votes

To answer this question simply, C++ is a much more complex language than other languages available on the market. It has a legacy inclusion model that parses code multiple times, and its templated libraries are not optimized for compilation speed.

Grammar and ADL

Let's have a look at the grammatical complexity of C++ by considering a very simple example:

x*y;

While you’d be likely to say that the above is an expression with multiplication, this is not necessarily the case in C++. If x is a type, then the statement is, in fact, a pointer declaration. This means that C++ grammar is context-sensitive.

Here’s another example:

foo<x> a;

Again, you might think this is a declaration of a variable "a" of type foo<x>, but it could also be interpreted as:

(foo < x) > a;

which would make it a comparison expression.

C++ has a feature called Argument Dependent Lookup (ADL). ADL establishes the rules that govern how the compiler looks up a name. Consider the following example:

namespace A{
  struct Aa{}; 
  void foo(Aa arg);
}
namespace B{
  struct Bb{};
  void foo(A::Aa arg, Bb arg2);
}
namespace C{ 
  struct Cc{}; 
  void foo(A::Aa arg, B::Bb arg2, C::Cc arg3);
}

int main()
{
  foo(A::Aa{}, B::Bb{}, C::Cc{});  // which foo? ADL considers A, B, and C
}

The ADL rules state that the name "foo" is looked up in the namespaces associated with all arguments of the function call. In this case, all of the functions named "foo" are considered for overload resolution (here only C::foo is viable, since it is the only one taking all three arguments). This process can take time, especially if there are lots of function overloads. In a templated context, ADL rules become even more complicated.

#include

This directive is something that can significantly influence compilation times. Depending on the file you include, the preprocessor might copy in only a couple of lines of code, or it might copy in thousands.

Furthermore, the result of this directive cannot be cached by the compiler: if the header file depends on macros, the same #include can expand to different code depending on what is defined just before the inclusion.

There are some solutions to these issues. You can use precompiled headers, which are the compiler's internal representation of what was parsed in the header. This can't be done without user effort, however, because precompiled headers assume that headers are not macro-dependent.

The modules feature provides a language-level solution to this problem. It’s available from the C++20 release onward.
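
A minimal sketch of what a named module can look like (the .ixx extension is the MSVC convention; Clang typically uses .cppm):

// math.ixx -- compiled once into a binary module interface
export module math;

export int square(int x) { return x * x; }

// main.cpp -- importing does not re-parse the module's source text
import math;

int main() { return square(7); }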

Templates

Template compilation speed is challenging. Each translation unit that uses templates needs to include them, and the definitions of these templates need to be available. Some instantiations of templates trigger instantiations of other templates, and in some extreme cases template instantiation can consume lots of resources. A library that uses templates but was not designed for compilation speed can become troublesome, as you can see in the comparison of metaprogramming libraries at this link: http://metaben.ch/. The differences in compilation speed between them are significant.

If you want to understand why some metaprogramming libraries are better for compilation times than others, check out this video about the Rule of Chiel.

Conclusion

C++ is a slowly compiled language because compilation performance was not the highest priority when the language was initially developed. As a result, C++ ended up with features that might be effective during runtime, but are not necessarily effective during compile time.

P.S. - I work at Incredibuild, a software development acceleration company specializing in accelerating C++ compilation; you are welcome to try it for free.