Is there an objective reason to free memory in a program?
There is no single objective reason to free memory. With proper program design you may not free memory at all, but reuse it instead.
There is no inherent reason to free every piece of memory. Your goal is that the program keeps running and does not crash from running out of memory. Everything beyond that is less important.
This does mean, however, that if you allocate memory continuously, in a loop, then sooner or later memory will be exhausted and your program will crash. So memory allocated in a loop should be returned. (Or not returned, if you do not care about long-running executions of the program.)
Notice that global variables are, in essence, memory that is allocated and never returned. Likewise, any singleton is allocated, never-freed memory. If you allocate memory once, or a bounded number of times, you most likely do not need to return it.
Update: memory can be reused. This is especially simple in C (you allocate a pool of memory once and use your own allocation functions for the rest of the program); in C++ it is somewhat more involved (you will need placement new). This complicates your program, but it can make it more efficient (a custom allocator is often faster than the runtime's general-purpose one). Meanwhile the amount of consumed memory does not grow, so there is no immediate threat of exhausting it.
However, note an important special case. If you write in a language where freeing memory also triggers additional actions (that is, you have a destructor), then it may be worth freeing memory precisely for the sake of those actions. For example, if you open a file in a constructor and close it in a destructor, it makes sense to free the object so that other programs (or other parts of your program) can access that file.
There are other reasons why freeing memory may be necessary. For example, if a program occupies a lot of address space, swapping it out of RAM to the page file and back takes extra time, and the program feels slower from the user's point of view. But these are reasons of a different order.
And in garbage-collected languages most memory is freed automatically; you do not even need to think about it.
In my opinion, the objectivity lies in the desire to keep the implementation of any program as simple as possible, since there are already plenty of other difficulties that arise in the creative process.
Take smart pointers, for example. Their popularity stems from an objective unwillingness to perform correct memory release "by hand". Erring through carelessness is human, which means code that implements a particularly elaborate manual approach will not only take much longer to write than its "smart" counterpart, it will also require extra time for testing.
If a project has no firm deadline on its road to completion and/or does not affect the developer's financial well-being, then the developer can afford to step away from the objective need to finish the work as quickly as possible.
Local allocation and release of memory is the consequence of an objective desire not to overcomplicate things and not to waste time on trifles, which most tasks, in the overwhelming majority, are. A global buffer and elaborate pointer management to keep an average product fast will make little difference if the whole task boils down to a dialogue with a user or some other asynchronous process, where even the appropriateness of multithreading raises certain questions.
The appetite for complexity loses its appeal the moment the relative performance loss, say, from using those same smart pointers or plain new/delete pairs, becomes less noticeable than the entirely objective and sometimes very substantial number of man-hours spent developing the same functionality without them.
In conclusion: the objective reason for using local control over memory allocation is quite banal practicality, which in some cases is born of fear of the entirely predictable abuse a boss will heap upon a slow programmer.
On the question of reusing already-allocated memory: here the dreaded phrase "address-space fragmentation" enters the picture.
Suppose a program grabs one large block of memory and allocates and frees smaller blocks inside it. Memory fragmentation then becomes possible: blocks 0, 1, 2 are allocated (filling all of the memory), and then blocks 0 and 2 are freed.
After that, if you need to allocate a block equal in size to two of the previous ones, the allocation fails without defragmentation (which is extremely difficult to perform), even though enough total memory is free, and further work of the program becomes impossible.
When memory is managed through the OS, scattered fragments of physical memory can be stitched into a contiguous virtual range using the MMU. An application program has no such capability within its own address space.
The program must work correctly, and you must be confident that it does. That is hard: miss a single spot and there is a bug in the program, and until you put everything in order its quality stays low. Unit tests, static analysis, logging and a few other practices exist for exactly this. As long as you do not control all of your memory, you are not protected from errors. A memory leak by itself may not be a bug, but it is the start of a road to problems. If you did not write the deallocation code right away, you risk never writing it; in a large project you can then chase the resulting bug for months, or never catch it at all.

There used to be a practice of setting a pointer to null immediately after calling delete, after which you could forget about it. Nowadays there are, for example, smart pointers with automatic deletion.

When you reuse memory (even after freeing it), you can step on a rake in the form of stale old values left behind, and you really will step on it once the number of fields passes a hundred and you can no longer track them all. When stack variables are needed, it is better to create new ones; there is enough memory for that. Suppose there is a long method that does not fit on one screen (it may take two or even three). You allocate memory in the first paragraph, use it, and want to reuse it much further down. Then you would have to study all the code above just to verify that the memory really is free to reuse.
That is hard work with little payoff. It is easier to delete the old memory, be done with it, and allocate fresh memory under a new pointer name. If you are not writing for a server, you may not suffer from a few lost kilobytes for a while, although you will still have to hunt for them eventually. In any case, your program will not run for more than a few hours: either the user closes it and walks away on other business, or it crashes because of your mistake (it may also crash through no fault of yours).
In programs written in languages with explicit memory allocation and deallocation (C, C++, etc.), there are two approaches to working with memory.

1. Allocate once and reuse:
no "memory leak" errors;
the program must track memory fragmentation itself.

2. Allocate and free as needed:
"memory leak" errors are possible;
the OS tracks memory fragmentation.
Algorithmic complexity, implementation complexity, "the abuse a boss heaps upon a slow programmer" - these are the more subjective considerations.
Reuse implies having your own memory manager, with its own alloc and free. Does it make sense to build what is essentially a wrapper around the system mechanisms? Only if there is a definite performance gain. And it is not a given that what you call "freeing" is really a release at all.
Source: https://ru.stackoverflow.com/questions/542141/