Imagine a class library DLL written in C# that contains 100,500 classes (read: an enormous number), with just as many methods and fields in each, so the file itself is astronomically large. Now imagine a second program, a console C# application, that wants to use this DLL; the DLL is added to the project through an ordinary assembly reference. Suppose the application contains the following code:

TestClass a = new TestClass();
Console.WriteLine(a.add(10, 10)); // prints 20
Console.WriteLine(a.sub(20, 15)); // prints 5

Here TestClass is a class from the DLL.
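For concreteness, the library side might look something like the sketch below. Only the names TestClass, add, and sub come from the snippet above; the namespace and library name TestLib are assumptions for illustration.

// Hypothetical source of the class library (TestLib.dll).
// In reality the library would contain a great many classes like this one.
namespace TestLib
{
    public class TestClass
    {
        public int add(int x, int y) { return x + y; }
        public int sub(int x, int y) { return x - y; }
    }
}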

The question: how does the import from the DLL work? When the process starts, will the runtime load this entire huge library into my process? Will only the classes I actually use be loaded? Or only the necessary methods?

  • The whole library definitely won't be loaded. Whether it goes method by method, I don't know. - Monk
  • I suspect the file on disk is simply mapped into the process's memory. When a page that hasn't been loaded yet is touched, it is read in from disk (only the page you need, not the whole file). - kmv
  • Judging by the counters in the .NET CLR Loading group (Total Classes Loaded, Bytes in Loader Heap), classes are loaded as needed. I checked with EF (about 5 MB): after initializing a DbContext the loader heap holds only ~1 MB and about 250 classes, while EF contains more than 3,000 classes. (A sketch of reading these counters from code follows this list.) - kmv
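For reference, here is one way to read the counters mentioned in the comment above, using System.Diagnostics.PerformanceCounter on Windows / .NET Framework. This is an illustrative sketch, not part of the original discussion; it assumes the counter instance is named after the current process, which is the usual convention.

using System;
using System.Diagnostics;

class LoadingCounters
{
    static void Main()
    {
        // The ".NET CLR Loading" category has one instance per managed
        // process, normally named after the executable.
        string instance = Process.GetCurrentProcess().ProcessName;

        using (var classes = new PerformanceCounter(".NET CLR Loading", "Total Classes Loaded", instance))
        using (var heap = new PerformanceCounter(".NET CLR Loading", "Bytes in Loader Heap", instance))
        {
            Console.WriteLine("Total Classes Loaded: " + classes.NextValue());
            Console.WriteLine("Bytes in Loader Heap: " + heap.NextValue());
        }
    }
}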

1 answer

A .NET DLL can contain several kinds of data: byte code (IL), metadata describing the types, metadata describing the assembly itself, and plain resources such as icons.

When you use the library, its metadata and byte code are loaded. Processes in the CLR are somewhat more complicated than in plain Windows, because the CLR works with managed code, so DLLs are loaded not into your process as such but into the process of the host application.

But to simplify, we can assume that yes, all the code from the DLL will be loaded into your process. As for its supposed enormity, I have my doubts: I just looked at our current project, which has 55 assemblies with an average assembly size of 240 kilobytes. For modern computers, where 4-8 GB of RAM is the norm even on ordinary office machines, that is pocket change.
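You can observe the loading behavior from code: the CLR does not load a referenced assembly until the JIT first compiles a method that touches a type from it. Below is an illustrative sketch; TestLib is the hypothetical assembly name from the example above.

using System;
using System.Runtime.CompilerServices;

class Program
{
    static void Main()
    {
        // Report every assembly as the CLR loads it.
        AppDomain.CurrentDomain.AssemblyLoad += (s, e) =>
            Console.WriteLine("Loaded: " + e.LoadedAssembly.GetName().Name);

        Console.WriteLine("Before using the library");
        UseLibrary(); // TestLib is loaded only when this call is JIT-compiled
        Console.WriteLine("After using the library");
    }

    // NoInlining keeps the JIT from compiling the body (and thus loading
    // TestLib) together with Main.
    [MethodImpl(MethodImplOptions.NoInlining)]
    static void UseLibrary()
    {
        var a = new TestLib.TestClass();
        Console.WriteLine(a.add(10, 10));
    }
}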

If you do run into a really huge DLL that "weighs" 10 megabytes, then most likely it contains not only code but also resources, and they occupy most of the file. You can check this assumption with the ILDASM utility, which ships with .NET and lets you see what an assembly consists of.
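Besides ILDASM, the same check can be done from code via reflection: list the embedded resources and compare their total size with the file size. A sketch; the file name MyBigLibrary.dll is an assumption.

using System;
using System.Reflection;

class ResourceSizes
{
    static void Main()
    {
        // If the resource sizes add up to most of the file, the DLL is
        // big because of resources, not code.
        var asm = Assembly.LoadFrom("MyBigLibrary.dll");
        foreach (string name in asm.GetManifestResourceNames())
        {
            using (var stream = asm.GetManifestResourceStream(name))
                Console.WriteLine(name + ": " + stream.Length + " bytes");
        }
    }
}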

Resources are loaded only as needed, although when memory is plentiful Windows may read everything in at once.

  • I didn't actually find a DLL of that size, I just assumed one. But in any case, isn't it bad that when I use only 1 class from a DLL, all 100 classes get loaded into memory? That looks like a waste of resources; is this really not optimized in .NET? - Ivan Smollenko
  • @user218772 then don't make libraries like that, where you use almost nothing from them :) - Pavel Mayorov
  • @PavelMayorov, yes, you are right, but all the same: even if a library has 2 small classes and only 1 is used, it is still unpleasant to know that code which gives you nothing at all is taking up 100 bytes of memory. And writing DLLs that each contain only 1 class is, in my opinion, not an option either :) - Ivan Smollenko
  • 3
    @ user218772 Donald Knut, with his characteristic irony, once said that the root of all evils is premature optimization. Look at the dynamics of the price of one megabyte of RAM , now it is 0.35 cents. One third of a cent per megabyte and every 2 years according to Moore's law, it decreases by 2 times. In the existing solution, the loading speed is cardinally higher, because if you need 50 classes out of 100, you will go to the disk 50 times for each class, and this is really very slow. Therefore, when determining the slope, consider priorities. - Mark Shevchenko
  • 2
    @ user218772 You write as if the meaning of the DLL in optimization, but it is not. The concept of modules , libraries, and so on arose at the junction of the 60s and 70s, since the size of the projects became too large for convenient work with them, the number of errors became too large. In the early 1990s, another quantitative leap was achieved, and such notions as level or link appeared . The application is divided into levels and modules in order to make it easier to work on, including a team and several teams. That is the meaning of the modules, not in the economy of millionths of a cent per RAM. - Mark Shevchenko