A question came up about how RAM works and how it is allocated to applications.
There is a JS program that uses a large number of typed arrays. There are two options: create each array separately (each is 2048 bytes), in which case memory is requested from the system in 2048-byte blocks; or first create a single ArrayBuffer (just a block of raw memory) of 100 MB and then create the typed arrays on top of it (in effect they behave like pointers in C++, created by specifying the buffer, an offset from its start, and a length).
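A minimal sketch of the two approaches; the chunk size of 2048 bytes is from the question, while the count of 1000 is an illustrative value:

```javascript
const CHUNK = 2048;
const COUNT = 1000; // illustrative, not from the original test

// Option 1: allocate each typed array separately; every `new Uint8Array(n)`
// gets its own backing ArrayBuffer.
const separate = [];
for (let i = 0; i < COUNT; i++) {
  separate.push(new Uint8Array(CHUNK));
}

// Option 2: reserve one large ArrayBuffer up front, then create views
// into it at fixed offsets; no new backing memory is allocated per view.
const pool = new ArrayBuffer(CHUNK * COUNT);
const views = [];
for (let i = 0; i < COUNT; i++) {
  views.push(new Uint8Array(pool, i * CHUNK, CHUNK));
}
```

All views in option 2 share the same backing store, so a write through one view is visible through any other view (or a fresh view) over the same byte range.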
A test showed that in the second case the arrays are created roughly twice as fast.
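A rough micro-benchmark sketch of that comparison; results vary by engine, machine, and load, and the loop count (51,200 chunks of 2048 bytes, i.e. 100 MiB total) is an assumption chosen to match the 100 MB figure above:

```javascript
const CHUNK = 2048;
const COUNT = 51_200; // 51,200 * 2048 bytes = 100 MiB

// Case 1: a fresh allocation on every iteration.
let t0 = Date.now();
for (let i = 0; i < COUNT; i++) {
  new Uint8Array(CHUNK); // new backing buffer each time
}
const separateMs = Date.now() - t0;

// Case 2: one 100 MiB reservation, then views only.
const pool = new ArrayBuffer(CHUNK * COUNT);
t0 = Date.now();
for (let i = 0; i < COUNT; i++) {
  new Uint8Array(pool, i * CHUNK, CHUNK); // view creation, no new memory
}
const viewMs = Date.now() - t0;

console.log({ separateMs, viewMs });
```

Note that in case 2 the cost of `new ArrayBuffer(...)` itself is deliberately outside the timed loop, which is exactly the trade-off the question is about: paying for one large reservation up front versus many small requests later.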
However, my computer has plenty of RAM, so the question is: if the system does have 100 MB available but its memory is fragmented, will it take time to defragment it before the allocation succeeds? More generally, can it happen that on a heavily loaded system, allocating memory in 2048-byte blocks is more efficient than reserving 100 MB at once?