In connection with this issue ...

I wondered how dynamic memory in a DLL actually behaves, and in particular what happens to dynamically allocated memory after the DLL is unloaded.

I created a simple DLL with a function make_array that allocates an array on the heap. VC++, static runtime linking.

Then I experiment with it like this:

 dll = LoadLibrary("Dll.dll");
 memfunc m = (memfunc)GetProcAddress(dll, "make_array");
 int * a = m(5);
 for(int i = 0; i < 5; ++i) cout << a[i] << " ";
 cout << endl;
 FreeLibrary(dll);
 for(int i = 0; i < 5; ++i) cout << (a[i] *= a[i]) << " ";
 cout << endl;
 dll = LoadLibrary("Dll.dll");
 m = (memfunc)GetProcAddress(dll, "make_array");
 for(int i = 0; i < 5; ++i) cout << (a[i] *= a[i]) << " ";
 cout << endl;
 delete[] a;
 a = m(6);
 for(int i = 0; i < 5; ++i) cout << a[i] << " ";
 cout << endl;

In other words, I checked whether memory allocated in the DLL can be deleted from the program, and whether that memory remains accessible after the DLL is unloaded. So if both are built with the same compiler, everything works fine. Is the memory manager smart enough for this?

I did the same with Open Watcom, and the result was the same.

But if the DLL is built with OW and the program with VC++, everything works except delete[]. Which, of course, was to be expected, due to the different memory managers.

No, I do understand that memory should be freed where it was allocated :)

But there are two questions I just cannot answer.

If memory is allocated by the memory manager inside the DLL, what mechanism keeps the two managers from stepping on each other? Is some region of memory reserved for the DLL's manager? And when the DLL is unloaded, as I understand it, that region is not freed?

How does one really work with things like smart pointers? Say, the DLL calls make_shared and the result is then used in the program; how do you guarantee that the final deletion happens inside the DLL? Everything I can come up with feels too contrived. Unless you call back into the DLL?

P.S. I haven't worked with DLLs seriously, and haven't dug deep...

3 answers

The first thing you need to know about is the runtime. In the case of Visual Studio there are /MD and /MT. In the first case a shared runtime is used, and if you create an object in one module and delete it in another, everything works as expected. If each module has its own runtime, it is harder. Create in one place and delete in another, and it may work (if the memory manager is smart enough), or it may work for a while and then crash (if the DLL and the program happen to use the same runtime version), or it may crash right away (if the runtimes are different).

How does an application work with memory? Usually the memory manager (which is part of the runtime) requests a large chunk of memory (4 MB or even 64 MB) from the operating system and then "cuts" small pieces out of it for the objects created with new / malloc. When the application exits, this memory is given back to the operating system.

And when the DLL is unloaded, as I understand it, that region is not freed?

If the runtime is dynamic, then the memory manager is shared, and the application itself manages this memory.

If a static runtime is used, then everything is up to the programmer and the runtime. The DLL knows it is being unloaded and "should" free its memory.

How does one really work with things like smart pointers?

Smart pointers are good helpers here. You just need to make sure they have the correct custom deleter attached. Then the right function will be called in the right memory manager (though, as always, the programmer still has every opportunity to break things).

But if the DLL is built with OW and the program with VC++, everything works except delete[]. Which, of course, was to be expected, due to the different memory managers.

Here the situation is tricky. When delete[] is called, the memory manager must know how much memory to free, and that count has to be stored somewhere. I know for certain that Visual Studio and gcc use different, incompatible schemes for this. So if such a pointer is passed between DLLs with different managers, absolutely anything can happen.

  • That's just it: I compiled with /MT , i.e. with the runtime statically linked. - Harry

Let's sort this out.

The problem is that C++ has no binary standard. This means that the same class, when compiled by different compilers (or even by the same compiler with different flags), can have a different binary layout: a different representation of the virtual method table, different name mangling, or simply a different physical size. Therefore, passing anything more complex than plain structures between components built by different compilers, or by different versions of the same compiler, can naturally lead to problems and to trampling on someone else's memory. In tricky cases it even makes sense to specify #pragma pack .

This explains why extern "C" is so often found in header files exported from DLLs.

If all DLLs are guaranteed to be compiled with the same version of the compiler, with the same flags, then you can pass objects between them. But this immediately introduces a versioning problem: updating one component brings the problem back.


This also explains why you cannot pass objects via smart pointers unless all DLLs in the project are compiled with the same version of the compiler. (And also why system DLLs do not use smart pointers, exceptions, or other C++ features in their interfaces.)


Next, the problem of memory ownership. This part is Microsoft-specific. A DLL can be compiled with either static or dynamic linkage of the runtime library.

With a statically linked runtime, each DLL has its own implementation of new and delete; they know nothing about each other and have disjoint heaps. This means you must make sure an object is freed by the same runtime that allocated it.

With a dynamically linked runtime, you have one runtime per application, but only if every module is compiled by the same version of the studio! In this good case you can allocate an object in one DLL and free it in another. In the bad case, when different DLLs are compiled by different versions of the studio, they refer to different versions of the runtime library, and the running program ends up with one runtime per studio version its DLLs were built with. With obvious consequences.

  • Overloading operator delete ? - Qwertiy
  • @Qwertiy: Yes, a valid use case, by the way. And it can look quite trivial. - VladD
  • ::delete ? And then new should be overloaded too? Maybe write a separate answer describing that solution? - Qwertiy
  • @Qwertiy: No, I'm afraid to get it wrong; I'm no C++ expert, I just stood nearby :-) // And no, I meant delete for the object. - VladD
  • I meant that the overloaded delete has to call ::delete , otherwise it would recurse. Or am I wrong? I'm also afraid to get this wrong, especially since I haven't worked with DLLs. - Qwertiy

In my opinion, this is well covered here:

  1. Symmetric ownership semantics. If the DLL returns a pointer to a dynamically allocated chunk of memory, then the user must call the matching deallocator for it.

Of course, if they do not match, there will be a problem. For this case it is recommended that the loaded library itself export functions for freeing the memory it allocated (dll->free() or dll->freeArray()).

  2. Flexibility. The user may want to allocate the memory elsewhere - not necessarily on the heap.

That is, in the end, passing a pointer to already-allocated memory into the DLL function is better than allocating the memory inside the DLL itself.

  • Well, this is fairly trivial; I was, in general, saying much the same thing... - Harry