This is not the first time I've heard that heap allocation in C# or Java is faster than in C++. I can't understand how that could be: heap allocation means searching for a free block of memory. How could that possibly be faster, and are the rumors true?

  • 1
    "and whether the rumors are true" - benchmarks. "How can this be sped up?" - there is nothing to speed up. - DollarDollar
  • @DollarDollar, in other words, "what does the garbage collector do that makes heap allocation faster?" - xperious
  • in other words, it solves only problems that actually exist - DollarDollar

1 answer

Yes, these rumors are true and well founded.

The thing is that in languages like C++, allocation usually means searching for a free block in the list of free blocks. On top of that, if the program is multithreaded, a global lock may be needed. Once a free block has been found, the free list also has to be updated and bookkeeping information has to be written into the selected block.
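
To make that concrete, here is a deliberately naive sketch (my own illustration, not code from any real malloc) of the work a free-list allocator does on every call: take the lock, walk the list, unlink the chosen block. Real allocators are far more sophisticated, but the basic steps are the same.

    #include <cstddef>
    #include <mutex>

    // Naive free-list allocator sketch (illustration only, not production code).
    struct FreeBlock {
        std::size_t size;
        FreeBlock*  next;
    };

    FreeBlock* freeList = nullptr;   // head of the list of free blocks
    std::mutex heapMutex;            // global lock for multithreaded programs

    void* naiveAlloc(std::size_t requestedSize) {
        std::lock_guard<std::mutex> lock(heapMutex);   // take the global lock

        // Search the free list for the first block that is large enough.
        FreeBlock** prev = &freeList;
        for (FreeBlock* block = freeList; block != nullptr;
             prev = &block->next, block = block->next) {
            if (block->size >= requestedSize) {
                *prev = block->next;   // unlink the block from the free list
                return block;          // splitting and header bookkeeping omitted
            }
        }
        return nullptr;   // nothing suitable: a real allocator would ask the OS for more memory
    }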

In languages like C#, allocation is much simpler. Memory for a new object is carved off the top of the heap, so allocation boils down to something like

Interlocked.Add(ref heapPtr, requestedSize);   // atomically bump the heap pointer (pseudocode)

which, of course, is much faster than searching through the list of free blocks.

How is this possible? The point is that, besides the collection itself, the garbage collector in .NET also performs heap compaction: the holes that appear in the heap where collected garbage used to be are filled with the surviving objects, and all pointers to those objects are updated accordingly. Example:

Allocated objects:

 [ object #1 ][ object #2 ][ obj. #3 ][ obj. #4 ][ obj. #5 ]
                                                            ^
                                                            |
                                                            heapptr

The first phase of garbage collection (objects #1 and #4 have been collected):

              [ object #2 ][ obj. #3 ]           [ obj. #5 ]
                                                            ^
                                                            |
                                                            heapptr

Compaction:

 [ object #2 ][ obj. #3 ][ obj. #5 ]
                                    ^
                                    |
                                    heapptr

As a result, the runtime can always allocate new objects at the very top of the heap, without worrying that the holes in the heap will waste too much space.
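
A toy model of that compaction step (my own sketch, not the actual .NET GC code): live objects are slid towards the start of the region, and the collector records where each one moved so that every reference can be rewritten afterwards.

    #include <cstddef>
    #include <unordered_map>
    #include <vector>

    // Toy model: the "heap" is a list of objects with sizes and liveness flags.
    struct Object {
        std::size_t offset;   // where the object currently starts
        std::size_t size;
        bool        live;     // did it survive the mark phase?
    };

    // Slide live objects towards offset 0 and return a relocation map
    // (old offset -> new offset) used afterwards to fix up every pointer.
    std::unordered_map<std::size_t, std::size_t>
    compact(std::vector<Object>& heap, std::size_t& heapPtr) {
        std::unordered_map<std::size_t, std::size_t> relocation;
        std::size_t freeOffset = 0;
        for (Object& obj : heap) {
            if (!obj.live) continue;              // dead object: its space is reclaimed
            relocation[obj.offset] = freeOffset;  // remember where it moves to
            obj.offset = freeOffset;              // "move" the object down
            freeOffset += obj.size;
        }
        heapPtr = freeOffset;   // new allocations continue right above the live data
        return relocation;
    }

Rewriting every reference through that relocation map is the step that requires knowing exactly where all the pointers are.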

In C++ there is no reliable way to find all the pointers to a given object, so objects cannot be moved, and the heap-compaction trick is impossible.


However, C++ can (in some cases) speed up allocations by means of custom allocators, which request a large chunk of memory from the system up front and then hand it out to objects themselves.
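
For example, a minimal arena (bump) allocator could look like the sketch below; the class and its members are hypothetical names of mine, but C++17's std::pmr::monotonic_buffer_resource provides a production-quality version of the same idea.

    #include <cstddef>
    #include <cstdlib>
    #include <new>

    // Minimal arena allocator sketch: grab one big chunk up front,
    // then hand out pieces by bumping an offset. Everything is freed at once.
    class Arena {
    public:
        explicit Arena(std::size_t capacity)
            : base_(static_cast<char*>(std::malloc(capacity))),
              capacity_(capacity), used_(0) {
            if (!base_) throw std::bad_alloc();
        }
        ~Arena() { std::free(base_); }   // one free() for the whole arena

        void* allocate(std::size_t size) {
            // Round the size up so every allocation stays suitably aligned.
            std::size_t aligned = (size + alignof(std::max_align_t) - 1)
                                  & ~(alignof(std::max_align_t) - 1);
            if (used_ + aligned > capacity_) return nullptr;   // arena exhausted
            void* result = base_ + used_;
            used_ += aligned;   // the whole "search" is a single addition
            return result;
        }

    private:
        char*       base_;
        std::size_t capacity_;
        std::size_t used_;
    };

    // Usage: objects are constructed in the arena and never freed individually.
    // Arena arena(1 << 20);
    // int* p = new (arena.allocate(sizeof(int))) int(42);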


In reality, garbage collection is more complicated than I have described. Here, for example, is an introduction to how garbage collection works in .NET.

  • 2
    Is "compactification" the accepted translation of compaction? Or not? I would be inclined to translate it as "packing" or something similar; "compactification" isn't even easy to pronounce :) Also, here's a link with funny animations of different GCs. - D-side
  • @VladD, if objects are only ever added at the end, then the address is easy to obtain with plain pointer arithmetic (a single addition), which makes it comparable to the speed of stack allocation... So where do Java and C# lose speed overall, if working with memory is that fast? - xperious
  • Interesting: if garbage collection doesn't use extra memory for the compaction, then which approach is preferable overall ("averaged out", so to speak)? I understand that each individual allocation in C# is fast, but what about the total number of CPU cycles (taking periodic garbage collection into account)? - avp
  • @D-side: Hmm, maybe "packing" is better? - VladD
  • @xperious: Exactly - in .NET the speed of allocating objects on the heap and on the stack is quite comparable. The loss in speed can come from all sorts of checks: for example, in C# every array access checks that the index is within range, while C++ does not. Also, JIT compilation may produce less efficient code (although it is constantly improving). - VladD