I'm trying to run a program (the GCC compiler) that uses a lot of RAM while it works. Unfortunately, it crashes due to lack of memory when its process has taken only about 2.2 GiB of virtual memory (according to the GNOME System Monitor). I've heard that on a 32-bit OS up to 4 GiB can be available to a single process. What is the reason for this discrepancy, and is there any way around the limitation? Why can't the process use memory up to that maximum?

This is what I get at the end of the GCC run:

virtual memory exhausted: Cannot allocate memory 

And here is the output of the command ulimit -a:

    core file size          (blocks, -c) 0
    data seg size           (kbytes, -d) unlimited
    scheduling priority             (-e) 0
    file size               (blocks, -f) unlimited
    pending signals                 (-i) 16382
    max locked memory       (kbytes, -l) 64
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 1024
    pipe size            (512 bytes, -p) 8
    POSIX message queues     (bytes, -q) 819200
    real-time priority              (-r) 0
    stack size              (kbytes, -s) unlimited
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) unlimited
    virtual memory          (kbytes, -v) unlimited
    file locks                      (-x) unlimited

I run the program in a chroot with a 32-bit environment on a machine with 6 GB of RAM. I need this to test builds for the i386 architecture.

  • And how much physical memory is available, and how much is already in use? - 0xdb
  • @0xdb, at the moment the compilation was interrupted, about 3.3 GiB of the 6 GiB was in use. On top of that there is another 10 GiB of swap. - mymedia

1 answer

With the standard 32-bit x86 SMP kernel, each process can use 3 GB of the 4 GB address space; the remaining 1 GB is used by the kernel (and is shared, i.e. mapped into the address space of every process).

With a 32-bit x86 kernel using the 4G/4G split (the "hugemem" kernel), each process can use (almost) the full 4 GB of address space, and the kernel gets a separate 4 GB address space of its own. This kernel was supported by Red Hat in RHEL 3 and 4, but they dropped it in RHEL 5 because the patch was never accepted into the mainline kernel, and most people now use 64-bit kernels anyway.

With a 64-bit x86_64 kernel, a 32-bit process can use the full 4 GB of address space, except for a couple of pages (8 KB) at the very top of the 4 GB that are reserved by the kernel. The kernel itself lives in the part of the address space that is beyond the reach of 32-bit code, so it does not cut into the user address space at all. A 64-bit process can use far more address space (128 TB in RHEL 6).
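If you want to see these limits directly, a small probe run inside the 32-bit chroot shows how much address space malloc() can actually hand out. This is only a rough sketch (the 64 MiB chunk size and the iteration cap are arbitrary choices); build it with gcc -m32 and compare with a 64-bit build:

    /* probe.c - rough measurement of how much address space malloc() can hand out */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const size_t chunk = 64 * 1024 * 1024;      /* 64 MiB per allocation */
        unsigned long long total_mib = 0;

        /* Cap the number of attempts so a 64-bit build (with overcommit)
         * does not loop for ages; a 32-bit process fails long before this. */
        for (int i = 0; i < 1024; i++) {
            if (malloc(chunk) == NULL)
                break;
            total_mib += 64;
        }

        printf("malloc() handed out about %llu MiB of address space\n", total_mib);
        return 0;
    }

On a stock 3G/1G kernel this should stop somewhere below 3 GB; under a 64-bit kernel the same 32-bit binary should get close to 4 GB.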

Note that part of the address space will be used by the program code, libraries, and the stack, so you cannot malloc() your entire address space. How much they take depends on the program. Look at /proc/<pid>/maps to see how the address space is used in your process; the amount you can malloc() is limited by the largest unused address range.
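As a quick way to automate that check, here is a minimal sketch that walks /proc/self/maps and reports the largest hole between two consecutive mappings (replace "self" with a PID to look at another process). It relies only on each line starting with the start and end addresses in hex:

    /* maps_gap.c - find the largest unused range in the process's address space */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/self/maps", "r");
        if (f == NULL) {
            perror("fopen");
            return 1;
        }

        char line[512];
        unsigned long prev_end = 0, best_gap = 0, best_at = 0;

        while (fgets(line, sizeof line, f)) {
            unsigned long start, end;
            if (sscanf(line, "%lx-%lx", &start, &end) != 2)
                continue;
            if (prev_end && start > prev_end && start - prev_end > best_gap) {
                best_gap = start - prev_end;
                best_at  = prev_end;
            }
            prev_end = end;
        }
        fclose(f);

        printf("largest hole: %lu MiB starting at 0x%lx\n",
               best_gap / (1024 * 1024), best_at);
        return 0;
    }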

  • And indeed, there is a rather large unused region between 0x0b75d000 and 0x55598000 - paste.ubuntu.com/26201187. Can I somehow use it? Say, by somehow moving all those shared libraries... - mymedia
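Regarding that comment: glibc's malloc() will not place its heap into such a hole on its own, but your own code can claim it with mmap() by passing an address hint inside the range (without MAP_FIXED, so the kernel is still free to put the mapping elsewhere if the range turns out to be busy). A minimal sketch, using 0x0c000000 purely as an example address inside the hole from the paste; note it will not help a program such as GCC whose allocations you do not control:

    /* grab_hole.c - try to map 1 GiB inside a known-unused address range */
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len  = 1UL << 30;                 /* 1 GiB */
        void  *hint = (void *)0x0c000000;        /* example address inside the hole */

        void *p = mmap(hint, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        printf("got %zu MiB at %p\n", len / (1024 * 1024), p);
        munmap(p, len);
        return 0;
    }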