I read about the LOCK# prefix on x86 processors, which locks the memory bus for the duration of the next instruction. A question came up: as far as I know, process locks are implemented roughly like this: take a lock on modifying some memory address (I don't really understand how this happens in detail), make the changes, release the lock.
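For concreteness, here is a tiny illustration (not from any manual, just my example) of what a LOCK-prefixed instruction looks like in practice, assuming C11 atomics and GCC/Clang on x86-64:

    /* C11 atomic increment; with the result discarded, x86-64 compilers
     * typically emit a single LOCK-prefixed instruction such as
     * "lock addl $1, (%rdi)". */
    #include <stdatomic.h>

    void increment(atomic_int *counter)
    {
        /* The LOCK prefix makes this read-modify-write atomic with respect
         * to all other cores; on modern CPUs it locks the cache line
         * rather than the whole memory bus. */
        atomic_fetch_add(counter, 1);
    }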

The questions are as follows:

  • What happens when a process/thread says it wants to lock a range of addresses, for example 1024-1040? Is some bit set somewhere, or what? (Assume a multi-core system and some shared piece of memory.)
  • What happens if a process takes such a lock on memory and then crashes? Does the OS release the lock, or what?

UPD: That is, as I understand it, being able to atomically change a value from 0 to 1 lets us record somewhere that process 1 has entered the critical section. But after that it needs to perform, say, 100500 operations and only then leave the critical section. Where is the information about which memory addresses it is going to change (its critical area) during those 100500 operations stored? Or is every single instruction simply executed with LOCK# until the lock is released?
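Roughly, what I have in mind is something like this sketch (C11 atomics, names made up by me):

    /* The only shared bookkeeping here is one flag: 0 = free, 1 = taken. */
    #include <stdatomic.h>

    static atomic_int in_critical_section;   /* the value that goes 0 -> 1 */

    void enter(void)
    {
        /* The atomic 0 -> 1 change I am talking about; on x86 this becomes
         * an XCHG/CMPXCHG that the processor treats as locked. */
        while (atomic_exchange(&in_critical_section, 1) == 1)
            ;   /* someone else is already inside: spin */
    }

    void leave(void)
    {
        atomic_store(&in_critical_section, 0);
    }

    void worker(int *shared_data)
    {
        enter();
        *shared_data += 1;   /* ...followed by the other "100500" ordinary
                              * operations; where (if anywhere) is it recorded
                              * which addresses they touch? */
        leave();
    }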

If all of the above sounds like nonsense, please explain how it actually works. :)

  • If you call yourself Ellochka, at least play the female role :) - VladD
  • For those who wish to answer, the key part here is: "please explain how it actually works". - avp
  • @avp, if you know the answer, feel free to speak up :) - Ellochka Cannibal
  • Are you interested in locking inside the kernel (at the lowest level it depends heavily on the architecture) or at the user level? In any case, the best answer is in the source code. - avp
  • @avp, yes, specifically inside the kernel; an example for a single architecture is enough for me. For example: the memory bus gets locked (although, as I was told, that is last century by now), a bit is set in some kernel structure saying that the shared memory page or region is locked, and when another process tries to write to it, it is paused and added to a wait list. That is roughly how I imagine it, but I would like a bit more detail :) - Ellochka Cannibal

2 answers

UPD: That is, as I understand it, being able to atomically change a value from 0 to 1 lets us record somewhere that process 1 has entered the critical section. But after that it needs to perform, say, 100500 operations and only then leave the critical section. Where is the information about which memory addresses it is going to change (its critical area) during those 100500 operations stored? Or is every single instruction simply executed with LOCK# until the lock is released?

There is no need to store memory addresses anywhere; it is the programmer's responsibility to use thread/process synchronization primitives when accessing a shared memory area. If the program's logic involves several such areas, each gets its own synchronization primitive(s).
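For example, something along these lines (an illustrative sketch with POSIX threads; the names are made up):

    /* Two unrelated shared areas, each guarded by its own mutex. The
     * association between a mutex and "its" data exists only in the
     * programmer's code; neither the hardware nor the OS keeps a list of
     * protected addresses. */
    #include <pthread.h>
    #include <stddef.h>

    static long            balance;          /* shared area #1 */
    static pthread_mutex_t balance_lock = PTHREAD_MUTEX_INITIALIZER;

    static char            log_buf[4096];    /* shared area #2 */
    static size_t          log_pos;
    static pthread_mutex_t log_lock = PTHREAD_MUTEX_INITIALIZER;

    void deposit(long amount)
    {
        pthread_mutex_lock(&balance_lock);   /* protects only area #1 */
        balance += amount;
        pthread_mutex_unlock(&balance_lock);
    }

    void log_char(char c)
    {
        pthread_mutex_lock(&log_lock);       /* protects only area #2 */
        if (log_pos < sizeof log_buf)
            log_buf[log_pos++] = c;
        pthread_mutex_unlock(&log_lock);
    }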

UPD: Using the LOCK prefix together with an instruction such as BTS (Bit Test and Set) allows, with some caveats, building the simplest interprocessor synchronization primitive.
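A rough sketch of such a primitive, assuming x86-64 and GCC/Clang inline assembly (an illustration, not production code; names are made up):

    /* A spin lock whose acquire path is literally LOCK BTS. */
    typedef struct { unsigned long word; } bts_lock_t;

    static inline void bts_lock(bts_lock_t *l)
    {
        unsigned char was_set;
        do {
            /* LOCK BTS: atomically test bit 0 of l->word and set it.
             * SETC copies the old bit value (CF) into was_set. */
            __asm__ __volatile__(
                "lock btsq $0, %0\n\t"
                "setc %1"
                : "+m"(l->word), "=q"(was_set)
                :
                : "cc", "memory");
        } while (was_set);   /* bit was already 1: someone else holds the lock */
    }

    static inline void bts_unlock(bts_lock_t *l)
    {
        /* Releasing needs no LOCK prefix: a plain store of 0 is enough on x86. */
        __atomic_store_n(&l->word, 0, __ATOMIC_RELEASE);
    }

A real implementation would at least execute PAUSE in the spin loop and eventually fall back to the OS scheduler instead of spinning forever; the point here is only to show the role of the single LOCK-prefixed instruction.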

  • the question is exactly how the synchronization primitive works at the level of "primitive code - operation on the processor (cores) - internal structures of the operating system". Example: 2 processes on different cores, each trying to change a common piece of memory; what happens, where, and when? :) Thanks! - Ellochka Cannibal
  • in short, how does the operating system sort this out under the hood? (the instructions involved, where things are stored, how the cores interact, etc.) - Ellochka Cannibal
  • @EllochkaCannibal, when it comes to memory management inside the kernel, we are always talking about pages. The descriptors of the physical pages involved in an operation are locked (there is such a data structure in the kernel). And, of course, there is a binding to the process, and on exit they are unlocked along with the release of the other resources assigned to the process (see the rough sketch below). - avp
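A very rough conceptual sketch of the idea avp describes in the comment above: a lock bit in a per-page descriptor plus a wait list. This is not real kernel code (Linux's struct page and lock_page() are far more involved), just an illustration:

    #include <stdatomic.h>

    struct waiter;                      /* a task sleeping on this page */

    struct page_desc_sketch {
        atomic_uint   flags;            /* bit 0: PAGE_LOCKED_SKETCH */
        struct waiter *wait_list;       /* tasks woken when the bit clears */
    };

    #define PAGE_LOCKED_SKETCH 0x1u

    int page_trylock_sketch(struct page_desc_sketch *p)
    {
        /* Atomically set the bit; we got the lock iff it was previously clear. */
        return !(atomic_fetch_or(&p->flags, PAGE_LOCKED_SKETCH) & PAGE_LOCKED_SKETCH);
    }

    void page_unlock_sketch(struct page_desc_sketch *p)
    {
        atomic_fetch_and(&p->flags, ~PAGE_LOCKED_SKETCH);
        /* ... wake up everything on p->wait_list (omitted) ... */
    }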

Nothing is stored anywhere. If one thread accesses the memory while inside the critical section, a second thread can still access the same memory area outside the critical section.

By the way, the very concept of a "critical section" refers not to a protected area of memory, but to a section of the program in which two threads cannot be executing at the same time.
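To illustrate (my own example with POSIX threads): the lock protects a code path, not the memory itself, so nothing stops a thread that never takes the lock from touching the same variable.

    #include <pthread.h>

    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static long shared_counter;

    void *thread_a(void *arg)        /* works inside the critical section */
    {
        (void)arg;
        pthread_mutex_lock(&m);
        shared_counter++;
        pthread_mutex_unlock(&m);
        return NULL;
    }

    void *thread_b(void *arg)        /* same memory, lock never taken */
    {
        (void)arg;
        shared_counter++;            /* a data race: nothing blocks this access */
        return NULL;
    }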