My question concerns organizing C++ sources. Below are two approaches: the usual one and an alternative. I use the second method in my "hobby" projects, and I would like to hear from experienced developers whether it is worth using, and why.

  1. The usual way

The source code is divided into two sets: header files and implementation files, that is, the familiar *.hpp plus *.cpp / *.cc files. The implementation files include the header files. Each implementation file is compiled into a *.obj (or *.o) file, and the linker later combines these into the executable.

With this method the header file very often sits next to the implementation file: for example, SuperFactory.hpp with SuperFactory.cpp in the same folder.
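As a concrete sketch of this split (the SuperFactory name comes from the example above; the method body is purely illustrative), here are the two files shown together in one listing for compactness:

```cpp
#include <string>

// SuperFactory.hpp -- declarations only (the class interface).
// In a real project this and the .cpp below are two separate files.
class SuperFactory {
public:
    // Declared here; defined in SuperFactory.cpp.
    std::string make(const std::string& name) const;
};

// SuperFactory.cpp -- the implementation, compiled on its own into
// SuperFactory.obj / SuperFactory.o. In the real file this part would
// start with: #include "SuperFactory.hpp"
std::string SuperFactory::make(const std::string& name) const {
    return "product:" + name;
}
```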

  2. The alternative

This method differs from the one above in that, instead of many implementation files, a single implementation file, implementation.cpp, is created. Yes, one .cpp file. It can be done as follows: the *.cpp / *.cc files become header files that contain the implementations and are included in implementation.cpp.

In this method the header files containing the declarations live in a separate folder, for example include, and the header files containing the implementations live in another one, implementation. Schematically it can be represented as:

    include\
        proba1.hpp
        proba2.hpp
    implementation\
        proba1.hpp
        proba2.hpp

    include.hpp:
        #include <include/proba1.hpp>
        #include <include/proba2.hpp>

    implementation.hpp:
        #include <implementation/proba1.hpp>
        #include <implementation/proba2.hpp>

    implementation.cpp:
        #include <precompiled_headers.hpp>
        #include <include.hpp>
        #include <implementation.hpp>
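A compressed single-file sketch of what the compiler ends up seeing under this scheme (only proba1 is shown; the function name and body are made up for illustration):

```cpp
// include/proba1.hpp -- declarations only.
int proba1_square(int x);

// implementation/proba1.hpp -- the definition, kept in a header.
// There is no proba1.cpp; this header is included exactly once, below.
int proba1_square(int x) { return x * x; }

// implementation.cpp -- the single translation unit of the program.
// In the real layout it would contain only:
//   #include <include/proba1.hpp>
//   #include <implementation/proba1.hpp>
// After preprocessing, the compiler sees exactly the text above,
// so the whole program produces a single object file.
```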

As mentioned above, I use the second method in my "hobby" projects. I want to understand what the reasons for not using it might be, and why.

  • At a minimum, everything ends up depending on everything, and the slightest change forces a complete recompilation. I don't like that... - Harry
  • @Harry: Sorry, but I don't follow. Why a complete recompilation? In the second method everything is the same, just in one .cpp file. Please look more carefully, and if the problem really exists, please spell it out in more detail. I'm sure you have plenty of experience, so don't hold back, share it ;) - sys_dev
  • Well, exactly: it's all in one file. All the source code. So any correction, even in the smallest .cpp with almost nothing in it, recompiles everything in the whole program. Or am I misunderstanding something? ... And, by the way, splitting code into separate files is somewhat disciplining: willy-nilly you start thinking about how to reduce dependencies, avoid global variables, and so on. Here, on the contrary, it's practically an invitation: everyone sees everything. - Harry
  • 1. The point about compilation is now clear; I agree. 2. As for splitting into separate files: the second method does it just like the first. Where there used to be Proba1.hpp and Proba1.cpp, there is now include/Proba1.hpp and implementation/Proba1.hpp. - sys_dev
  • Well, I didn't say this method forces you to do that; rather, it provokes you :) - Harry

1 answer

Since in the second case you effectively have a single translation unit, changing any part of it forces a complete rebuild of that unit.

Hence the cons:

  1. on large projects the build takes longer, and it slows down further as the project grows.
  2. with C++ organized this way it is quite possible to run the compiler out of memory, especially if there are templates, and especially recursive ones.
  3. translation-unit-level encapsulation can be violated: static variables and anonymous namespaces become visible to everyone, without any control.
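Point 3 can be shown concretely. A sketch (hypothetical helper names) of two implementation headers that, in the single-file scheme, get merged into one translation unit:

```cpp
// implementation/proba1.hpp -- `call_count` is meant to be private to
// this "file". With separate .cpp files, `static` would give it
// internal linkage confined to proba1's own translation unit.
static int call_count = 0;
int proba1_touch() { return ++call_count; }

// implementation/proba2.hpp -- a different "file", but in the
// single-cpp scheme it lands in the same translation unit, so this
// compiles even though it reads (and could mutate) proba1's
// supposedly private state. Worse, if proba2 defined its own
// `static int call_count`, that would be a redefinition error.
int proba2_peek() { return call_count; }
```

With separate translation units, proba2_peek would fail to compile here, because call_count would not be declared in its file; that failure is exactly the encapsulation the single-file scheme loses.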

With individual .cpp files, each is compiled into its own .o file, and the linker then combines them into one executable. As a result, if only one implementation file changes, only its .o file is rebuilt, and the linker relinks it with the old ones.

This will not save you, however, if the changes were made in a .h file that is included in many places: every .cpp file that uses it will be recompiled (much depends on how the build system tracks dependencies). This, by the way, is a reason not to create super-headers that pull in half of your system, and in general to keep #includes in header files to a minimum. I am not considering the case of precompiled headers here.

Also, when the header files and the implementation files live in different directories, it is not very convenient to browse the project with a file manager: you shouldn't have to load the IDE just to take a quick look at the project.