How do the heap and stack physically look in RAM?

  • and is access to the heap slower than to the stack? Or does that happen at the software rather than the hardware level? - voipp
  • 3
    Maybe the author is really asking about virtual memory - yes, the heap and stack may end up on the hard disk if the operating system pages them out. A separate point is that the stack in this case is a particular segment (memory region) of the program, while the heap lives in the data segment (with RW attributes), as opposed to the code segment (which is RO). In general, it is odd to discuss these things together - you have to keep the concepts apart and not mix apples and oranges. - gecube
  • 5
    “Physically” these are transistors and capacitors - DreamChild
  • 1
    @voipp, why did you decide that the stack is fast and the heap is slow? As far as I know, the memory access time in both is the same. - avp
  • 3
    @voipp: I don't think the information you read is true. As @DreamChild correctly said, both the stack and the heap are nothing but ordinary memory. The fact that the stack is smaller than the heap in the typical case means nothing by itself: you can access data locally in the heap and non-locally on the stack. (You probably meant processor-level caching, but it doesn't work the way you think.) There are no registers in high-level languages and there never will be, for many reasons (for example, because an optimizing compiler can optimize better than a human). - VladD

3 answers

  1. Both the stack and the heap are physically located in RAM (leaving aside exotic architectures with special processors/computers).
  2. Their size and location are determined by the OS.
  3. The heap can become fragmented (sometimes quite badly). OSes usually provide special routines for heap defragmentation.
  4. The stack is usually never fragmented (you could probably invent a stack implementation with fragmentation, but that would be an oxymoron).
  5. The stack is, in a sense, faster because it is driven by a single parameter - the stack pointer (usually a register) - so all stack operations are many times faster than heap operations. Reading from or writing to the stack is a single POP/PUSH.
  6. The heap is harder to work with precisely because of its fragmentation, and even a simple operation of extracting a value from it can cost dozens (if not hundreds) of processor instructions.
  7. The downsides of the stack are its small size (it is always an order of magnitude smaller than the heap) - and that access to it is strictly sequential.
  • 5
    @Barmaley: 6) Allocating memory on the heap is indeed slower, but once you have a pointer to an object on the heap and a pointer to an object on the stack, the access speed is exactly the same. 7) And again, access is not sequential reading but dereferencing a pointer. - VladD
  • 4
    Indeed, the speed of access to data on the stack and on the heap is the same, i.e. points 5) and 6) are a mistake. I went and measured the time to fill (several attempts) an array of 2 million ints (any more no longer fits on my stack) on the heap and on the stack: void fill(int a[], int n) { srand(0); for (int i = 0; i < n; i++) a[rand() % n] = rand(); }. This method was chosen to minimize the effect of the cache and data prefetch. Results (via clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts)): ./a.out 100 stack: avg: 43.332 (msec) heap: avg: 42.283 (msec) - avp
  • @avp you should not test it that way; you need to ensure the heap is fragmented - for that you have to make a bunch of random allocations of various sizes and also free them randomly, and only then measure the access speed to a large(!) array on the heap. Otherwise, if the heap is not fragmented, the speed will of course be the same. - Barmaley
  • @Barmaley ♦, in any case the entire array on the heap will occupy consecutive cells (in the virtual address space, of course). Or are you considering the theoretically possible cc-NUMA situation where the OS allocates the process's memory (via mmap?) from the physical memory of another node? Unfortunately, first, I currently have no access to such a system, and second, I have little idea how to model such a consistently reproducible situation on cc-NUMA. - avp
  • 2
    @Barmaley ♦, physical memory can always be fragmented - in the heap, on the stack, and in the code. But you are mistaken here: this does not affect access speed (on normal, non-NUMA architectures). - avp

Although it has been a long time since you asked the question, I would like to answer, because the question still comes up in search results, and I think many more people will visit this page.

As you have already been answered, "physically" these are transistors and capacitors. So the question itself was probably not quite correctly posed. You likely meant something like: "Where are the heap and stack located, and how are they organized?"

You are probably also interested in how memory is organized in general. I cannot fully cover that question here, but I want to point you to material that is, in my opinion, exhaustive, and give links to it. The materials are in Russian. You need to understand what virtual memory is and how it is translated into physical memory.

Here is an example of the organization of segmented memory.

image taken from the article http://jpauli.imtqy.com/2015/04/16/segmentation-fault.html

The bottom line is that the representation of addresses in "virtual memory" differs from the physical one. But that does not matter to the process (program), because the process is not the one doing the mapping (allocation/release); that is handled by the facilities of the OS and the MMU.

For a deeper understanding, you need to become familiar with the concept of "virtual memory", as well as with the ways it is organized: "segmented memory addressing" and "paged memory".

That, in brief, is how virtual memory maps onto physical memory.

About how memory is organized inside a process:

(diagram: memory layout inside the process)

Again, briefly: the process thinks it has been given one contiguous address space, inside which the regions "code", "stack", and "heap" are laid out.
The stack fills up and grows into the free area toward decreasing addresses as you call new functions. The heap grows the other way, toward increasing addresses. The OS, again, takes care that there is enough memory.

What is stored in the CODE block, as well as many other things, you can learn from the materials recommended in this answer.

I recommend reading the suggested articles in the order in which I have arranged them.

  1. Memory device basics

  2. Famous article "What Every Programmer Should Know About Memory"

  3. The organization of virtual memory. Page tables and more.

  4. Convert logical address to linear.

  5. MMU device


I do not quite understand the top-rated answers to your question, because in them, in my non-expert opinion, people have drifted into discussing other topics.

    Better to put it this way: the typical operations when working with memory are allocation/release and reading/writing. Allocation and release work more slowly on the heap.

    Data access (read/write) occurs at almost the same speed. "Almost", because typical access to values on the stack is direct, while access to the heap goes through a pointer. Direct access actually also means something in the spirit of ptr [esp+10h], i.e. to get the data the processor adds a small constant to the value of the esp register and reads memory at that address. If (the typical situation) a pointer to a block of heap memory lies on the stack, there are two levels of indirection: first fetch the pointer's value via ptr [esp+10h], then read the value at that address.

    If we, for example, take a pointer to data lying on the stack, there will be no difference in access speed at all.

    • "Data access (read/write) occurs at almost the same speed." And if the data is fragmented, won't reading slow down? - voipp
    • 2
      @voipp: What does "the data is fragmented" mean? If you mean fragmentation of an array, that does not happen: an array is always contiguous. If you mean access to different objects, they can be scattered in memory regardless of whether they live on the stack or on the heap. - VladD
    • 2
      Yes, I agree with @VladD. A structure cannot be fragmented either. If you have a structure containing a std::string on the stack, the string's contents are usually already on the heap :). You should rather think about which data structures to use than about where they live. A list, for example, can sometimes be better than an array, but it is prone to fragmentation (and uses more memory)... - Michael M