There is a function F that takes two files as input. It processes them and saves the two modified files to disk, replacing the old ones.

The number of CPU cores is known to be N.

There is a list of M files, from which an array of all C(M, 2) pairs was built (every combination of two files). This array is divided into successive tasks, each of which contains N disjoint pairs (no file appears in two pairs of the same task).
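Building the pair array and splitting it into tasks of disjoint pairs can be sketched like this (a minimal illustration; `make_tasks` is a hypothetical helper, not part of the question):

```python
from itertools import combinations

def make_tasks(files, n):
    """Split all C(M, 2) file pairs into tasks of at most n pairs each,
    where no file appears twice within one task (the pairs are disjoint)."""
    tasks = []
    pending = list(combinations(files, 2))  # every pair of distinct files
    while pending:
        task, used, rest = [], set(), []
        for a, b in pending:
            # Greedily take a pair if the task has room and both files are unused.
            if len(task) < n and a not in used and b not in used:
                task.append((a, b))
                used.update((a, b))
            else:
                rest.append((a, b))  # defer the pair to a later task
        tasks.append(task)
        pending = rest
    return tasks

tasks = make_tasks(['1.txt', '2.txt', '3.txt', '4.txt'], 2)
# Each task holds up to 2 pairs, and no file repeats inside a task.
```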

How, in Python, can this function F be called on all N cores of the CPU, given that each such multi-core call works on disjoint pairs of files (this is already implemented as a prepared array of tasks)? After processing, the function saves the 2*N files and is then launched with the next files.

That is, how to organize: call the function on N cores -> wait for the function to finish -> call the function on N cores -> wait for the function to finish -> the next call, and so on?

It should be implemented without using any complex structures or modules, such as multi-threaded classes and the like.

    1 answer

    The solution to the problem:

    from multiprocessing.dummy import Pool  # note: this is a thread pool, not processes

    def loop(s):
        for i in range(10):
            print(i, s)

    x = []
    x.append(['1.txt', '3.txt'])
    x.append(['2.txt', '4.txt'])
    ....
    results = Pool().map(loop, x)

    Fixed and checked, but still only one CPU is used.

    • 2
      In CPython, only one thread can execute Python bytecode at any given time (therefore, only one CPU will be loaded by Python code). Several threads can run simultaneously only if the other threads release the GIL (while waiting for I/O from the system, or performing calculations in C code without touching the Python interpreter). Look toward a process pool, for example from the concurrent.futures module. - jfs
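A minimal sketch of the process pool suggested in this comment, assuming a placeholder `process_pair` in place of the real F (a real F would rewrite the files on disk):

```python
from concurrent.futures import ProcessPoolExecutor

def process_pair(pair):
    """Placeholder for F: a real version would read both files,
    transform them, and save them back to disk."""
    a, b = pair
    return a + '+' + b

def run_all(tasks, n_workers):
    """Feed each task (a list of disjoint pairs) to a process pool.
    map() blocks until the whole task is finished, so the next task
    starts only after all N pairs of the previous one are processed."""
    out = []
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        for task in tasks:
            out.append(list(pool.map(process_pair, task)))
    return out

if __name__ == '__main__':
    tasks = [[('1.txt', '3.txt'), ('2.txt', '4.txt')],
             [('1.txt', '2.txt'), ('3.txt', '4.txt')]]
    print(run_all(tasks, n_workers=2))
```

The `if __name__ == '__main__':` guard is required on platforms where child processes are started by re-importing the main module (Windows, macOS).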
    • @jfs yes you are right - Denis Leonov
    • Here is an example of code using a thread pool (to use processes, you need to change the imports): ru.stackoverflow.com/q/705237/23044 - jfs
    • It seems to me it would be easier to make several calls to the Python function with parameters, wait for them to finish, and call again with new parameters, for example from a bat or sh file (depending on the OS). Just launch 8 instances at once and you get 8 parallel processes. - Denis Leonov
    • 2
      Why reinvent the wheel? A process pool will wait for the function to finish for you and will make new calls as resources become available. The bottom line comes down to results = Pool().map(function, many_args) - jfs
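The same one-liner with multiprocessing.Pool, which uses real processes (unlike multiprocessing.dummy from the answer above); F here is a placeholder for the actual file-processing function:

```python
from multiprocessing import Pool  # processes, not the thread-backed dummy

def F(pair):
    """Placeholder for the real F, which would rewrite both files on disk."""
    a, b = pair
    return (b, a)

if __name__ == '__main__':
    many_args = [('1.txt', '3.txt'), ('2.txt', '4.txt')]
    with Pool() as pool:                  # one worker per CPU core by default
        results = pool.map(F, many_args)  # blocks until every pair is processed
    print(results)
```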