As far as I understand, processes in a computer do not really run in parallel; in fact the computer just switches between them very quickly. But in that case the total time to complete all the processes is still the same as if they had run sequentially. Do I understand correctly that the main benefit of this kind of "parallelism" is that we don't have to wait until one process finishes and can start another without stopping the previous ones? What other advantages does this kind of "parallelism" have?

And if that is how this "parallelism" works, shouldn't the computer's clock accumulate a lag?

  • The computer's clock works even without the processor: there is a dedicated clock generator for that. - Dmitry
  • OK, the clock makes sense now. But what if, for example, we open two videos at once? Why do they both play correctly? - Hashirama
  • Pseudo-multitasking lets several tasks run "at the same time", imperceptibly to your eyes :) But seriously, this can't be explained in a few words. - Dmitry
  • Was the "imperceptibly to your eyes" part the joke? - Hashirama
  • No, that was a short way of describing how fast the switching is :) - Dmitry

2 answers

The answer depends on what your processes or threads do (for simplicity, let's keep talking about threads). Threads really will run at the same time if you have more than one core.

But as I understand the question, you are interested in the situation where there is only one core. In that case some threads can still make progress in parallel. For example, while the hard disk is reading data or the network card is receiving packets, the operating system can switch to another thread, and in such tasks the gain will be noticeable. If you have sent some data to the sound card, you can keep doing other things while it plays the chunk of sound it received.
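As a minimal sketch of the I/O-bound case (my own illustration, not part of the original answer: the task names are made up and time.sleep() merely stands in for a blocking I/O call), three one-second "waits" overlap and finish in roughly one second even on a single core:

    import threading
    import time

    def fake_io_task(name, seconds):
        # time.sleep() stands in for a blocking I/O call (disk read, network
        # recv, ...); while one thread waits, the OS can run another thread
        # on the same core.
        print(f"{name}: waiting for 'I/O'")
        time.sleep(seconds)
        print(f"{name}: done")

    start = time.monotonic()
    threads = [threading.Thread(target=fake_io_task, args=(f"task-{i}", 1.0))
               for i in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Three 1-second waits take about 1 second in total, not 3,
    # because the waiting periods overlap.
    print(f"elapsed: {time.monotonic() - start:.1f} s")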

If you have several threads that are mostly busy with computations keeping the processor core occupied, then there will be no gain from such "parallelization"; there will even be a slowdown, since switching between threads is not a particularly cheap operation.
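For contrast, a rough sketch of the CPU-bound case (again only an illustration; the iteration count is arbitrary, and in CPython the GIL adds to the effect, but even without it a single core has nothing to overlap):

    import threading
    import time

    def burn_cpu(iterations):
        # Pure computation: the core is busy the whole time,
        # so there is no idle period for another thread to use.
        total = 0
        for i in range(iterations):
            total += i * i
        return total

    N = 2_000_000

    start = time.monotonic()
    burn_cpu(N)
    burn_cpu(N)
    print(f"sequential:  {time.monotonic() - start:.2f} s")

    start = time.monotonic()
    threads = [threading.Thread(target=burn_cpu, args=(N,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # On one core (and under CPython's GIL) this is not faster than the
    # sequential run; the extra context switches can even make it slower.
    print(f"two threads: {time.monotonic() - start:.2f} s")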

    As far as I understand, processes in a computer do not really run in parallel; in fact the computer just switches between them very quickly.

    In modern multi-tasking operating systems, this is true, but in general it depends on the operating system.

    But in that case the total time to complete all the processes is still the same as if they had run sequentially.

    Yes, if all of the process's useful work is done on the processor. And that is not always the case: all kinds of I/O leave the CPU idle. If the code of another process is executed during that idle time, you save time.

    True, this does not account for the time spent on the so-called context switch (handing control over from one process to another), so purely CPU-bound tasks will actually become slower with such "parallelization".

    Do I understand correctly that the main benefit of this kind of "parallelism" is that we don't have to wait until one process finishes and can start another without stopping the previous ones?

    Yes, that is what I described above :) Also, other processes are not necessarily stopped "by force": they can stop themselves while waiting for a result from the OS kernel.

    What other advantages does this kind of "parallelism" have?

    Strictly speaking, this is not "parallelism" as such; the proper term is concurrency, which does not have a well-established translation ("competitiveness" is about the closest).

    Concurrency occurs when the code is structured not for strictly sequential execution but contains "forks" and "joins" whose branches can (potentially) be executed in parallel, though nothing obliges them to be. The mechanism you describe is exactly what lets such concurrent code get along on a single processor.

    Quite often concurrent code is used within a single process (in different threads) that maintains some user interface (UI) and launches some workload from it. They run concurrently so that the interface stays usable even while the program is busy with its work.
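    A toy sketch of that pattern, with a console "spinner" standing in for a real UI event loop (my own illustration, not tied to any particular GUI framework):

        import threading
        import time

        done = threading.Event()

        def heavy_work():
            # Stand-in for a long-running workload started from the UI.
            time.sleep(3)
            done.set()

        threading.Thread(target=heavy_work, daemon=True).start()

        # The "UI" thread keeps redrawing its spinner instead of freezing
        # until the work is finished.
        spinner = "|/-\\"
        i = 0
        while not done.is_set():
            print(f"\rworking... {spinner[i % len(spinner)]}", end="", flush=True)
            i += 1
            time.sleep(0.1)
        print("\rwork finished.   ")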

    However, this "advantage" depends largely on how the notion of a "process" is defined.

    For example, in Linux the difference between a "process" and a "thread" is minimal: each of them has its own "process ID", although these are not always visible from the outside, because tools that list processes tend not to show individual threads as superfluous detail.
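    This is easy to observe from Python, for instance (threading.get_native_id() needs Python 3.8+; the Linux-specific detail assumed here is that these native IDs are the kernel's per-thread IDs, and the main thread's ID coincides with the PID):

        import os
        import threading

        def report(label):
            # On Linux, get_native_id() returns the kernel's thread ID (TID);
            # for the main thread it coincides with the process ID (PID).
            print(f"{label}: pid={os.getpid()}, tid={threading.get_native_id()}")

        report("main thread")
        t = threading.Thread(target=report, args=("worker thread",))
        t.start()
        t.join()

        # ps -eLf or /proc/<pid>/task/ will list the same per-thread IDs,
        # while a plain ps shows only the process.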

    The same principle also shows up in a somewhat unexpected form: the interpreters of some languages, in particular Ruby (MRI), JS (V8) and Python (CPython), are built around a Global Interpreter Lock, because of which the interpreter can execute only one thread of code at a time. This greatly simplifies the implementation of the interpreter and of the code that interacts with it (C extensions, for example), although it imposes additional restrictions.
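    In CPython you can even look at the knob that controls how often the interpreter makes threads take turns holding the GIL (a small sketch; 0.005 s is the documented default and may differ in your build):

        import sys

        # How long a thread may keep the GIL before CPython asks it to release
        # it so that another thread can run (0.005 s by default).
        print(sys.getswitchinterval())

        # It can be tuned, e.g. to switch less often and reduce switching overhead:
        sys.setswitchinterval(0.01)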

    And if that is how this "parallelism" works, shouldn't the computer's clock accumulate a lag?

    No, everything is fine there: keeping the clock is usually handled by a separate device (an RTC with its own battery), not by a process.

    Otherwise, how would the clock keep running while the computer is switched off? It is easy to check, by the way: on all models of the Raspberry Pi, the clock has to be set again every time the board is powered on. The board keeps time while it is on, but there is no separate power source for the clock. An ordinary desktop computer behaves the same way if you remove the battery from the motherboard.