As far as I understand, processes on a computer don't actually run fully in parallel; the system rapidly switches between them.
In modern multitasking operating systems this is true, though in general it depends on the operating system.
But in that case, the time to complete all the processes is still the same as if they were running sequentially.
Yes, if the entire payload of a process executes on the CPU. And that is not always the case. All sorts of I/O leave the CPU idle. If the code of another process runs during that idle time, you save time.
True, this does not account for the time cost of the so-called context switch (transferring control from one process to another), so purely CPU-bound tasks will actually run slower with this kind of "parallelization".
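The time saving on I/O waits can be sketched in Python. Here `time.sleep` is a stand-in for a blocking I/O wait (disk, network, and so on); the function name and the durations are invented for illustration:

```python
import threading
import time

def fake_io_task():
    # time.sleep stands in for a blocking I/O wait; while this
    # thread is blocked, the OS is free to run other threads.
    time.sleep(0.2)

# Sequential execution: the four waits add up (~0.8 s).
start = time.monotonic()
for _ in range(4):
    fake_io_task()
sequential = time.monotonic() - start

# Concurrent execution: the four waits overlap (~0.2 s).
start = time.monotonic()
threads = [threading.Thread(target=fake_io_task) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
concurrent = time.monotonic() - start

print(f"sequential: {sequential:.2f}s, concurrent: {concurrent:.2f}s")
```

Replace the sleeps with pure-Python computation and the advantage disappears, which is exactly the CPU-bound case mentioned above.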
Do I understand correctly that the main feature of this kind of "parallelism" is that we don't have to wait for one process to finish, and can run others by suspending the previous one?
Yes, that's what I described above :) Other processes need not be stopped by "force". They can stop themselves while waiting for a result from the OS kernel.
What other advantages does this principle of "parallelism" have?
Strictly speaking, this is not "parallelism" as such; the correct term for it is concurrency.
Concurrency occurs when code is structured not for strictly sequential execution but contains certain "forks" and "joins", whose different branches can (potentially) run in parallel, though they are not required to. And the mechanism you described is exactly what lets concurrent code get along on a single processor.
Quite often, concurrent code is used within a single process (in different threads) that maintains some user interface (UI) and launches some workload from it. They run concurrently so that the interface remains usable even while the program is busy with work.
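The UI-plus-worker pattern can be sketched with two threads. Everything here is hypothetical: `background_job` stands in for the long-running work, and the polling loop stands in for a real event loop that would be processing clicks and redraws:

```python
import threading
import queue
import time

results = queue.Queue()

def background_job():
    # Stand-in for a long-running task; a real app would do
    # I/O or heavy computation here instead of sleeping.
    time.sleep(0.1)
    results.put("done")

# The worker runs concurrently with the "UI" loop below,
# so the main thread is never blocked by the long task.
worker = threading.Thread(target=background_job)
worker.start()

while worker.is_alive():
    # A real event loop would handle UI events on each iteration.
    time.sleep(0.01)

msg = results.get()
print(msg)
```

Passing the result back through a queue rather than a shared variable is the usual way to hand data from a worker thread to a UI thread safely.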
However, this "advantage" rests more on how the term "process" is defined.
For example, on Linux the differences between a "process" and a "thread" are minimal: each has its own "process ID", although these are not always visible from the outside, since process-listing tools try not to show individual threads as clutter.
But the same principle shows up in a somewhat unexpected place: the interpreters of some languages, in particular Ruby (MRI), JS (V8) and Python (CPython), are implemented with a Global Interpreter Lock, so the interpreter can only execute one thread of code at a time. This greatly simplifies the implementation of the interpreter and of code that interacts with it (C extensions, for example), though it imposes additional restrictions.
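The effect of the GIL is easy to observe in standard CPython: a pure-Python CPU-bound loop gets no speedup from threads, because only one thread executes bytecode at any moment. This sketch assumes a default (GIL-enabled) CPython build; the workload and iteration count are arbitrary:

```python
import threading
import time

def count(n):
    # Pure-Python CPU-bound loop: it holds the GIL while running bytecode.
    while n:
        n -= 1

N = 2_000_000

# Sequential: two runs back to back.
start = time.monotonic()
count(N)
count(N)
sequential = time.monotonic() - start

# Two threads: under the GIL only one runs bytecode at a time,
# so there is no speedup (often a slight slowdown from switching).
start = time.monotonic()
t1 = threading.Thread(target=count, args=(N,))
t2 = threading.Thread(target=count, args=(N,))
t1.start(); t2.start()
t1.join(); t2.join()
threaded = time.monotonic() - start

print(f"sequential: {sequential:.2f}s, threaded: {threaded:.2f}s")
```

Swap `count` for the sleep-based I/O stand-in from earlier and the threads do help, because CPython releases the GIL around blocking calls.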
In that case, if this principle of "parallelism" holds, shouldn't the computer's clock accumulate a lag?
No, the clock is fine there: keeping time is usually handled by a separate device (a battery-backed RTC), not by a process.
Otherwise, how would the clock keep running while the computer is off? This is easy to verify, by the way. For example, on all Raspberry Pi models the clock has to be set again every time the device is powered on: the Pi keeps time while it is running, but has no separate power source for the clock. An ordinary desktop computer behaves the same way if you remove the battery from the motherboard.