Please help me get multiprocessing working in Jupyter. Threading, for obvious reasons, doesn't really help me, but I'll leave an example of it here anyway.

The second example was, in my head, supposed to speed everything up 4 times, since my laptop has 4 physical cores, but something went wrong. It is written by analogy with the threading version, yet it doesn't work.

    import threading
    import time

    a = []

    def func(arg):
        time.sleep(1)
        a.append(arg)

    for file in range(100):
        threading.Thread(target=func, args=([1])).start()

    time.sleep(2)
    print(a)

After 2 seconds everything has finished:

    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
     1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
     1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
     1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]

    from multiprocessing import Process
    import time

    a = []

    def func(arg):
        time.sleep(1)
        a.append(arg)

    for file in range(4):
        Process(target=func, args=([1])).start()

    time.sleep(2)
    print(a)

    []

1 Answer

This is how it will work. The downside is that you have to import every module you use inside your function itself, because the worker has no access to the namespace it was launched from. In effect it starts several interpreters side by side and relies on Windows multitasking, which you can see in the process list.
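For instance, a minimal sketch of such a self-contained function (the sleep is only a stand-in for real work and the return value is an illustrative assumption, not part of the original answer):

    def func(url):
        # All imports live inside the function body: the worker process
        # does not share the notebook's namespace.
        import time
        time.sleep(1)      # stand-in for the real work on the given address
        # Return the result instead of appending to a global list:
        # each worker process only sees its own copy of the parent's globals.
        return url.upper()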

func — your function, which must contain everything it uses (including its imports).

args — the function's arguments, one for each run.

Example:

    args = ['xx1.com', 'xx2.com']

After the pool is created, two interpreters will be started on two different physical cores, and 2 copies of the function will do something with these addresses in parallel))

    from multiprocess import Pool  # !pip install multiprocess

    with Pool() as pool:
        peaks_rates = pool.map(func, args)
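Putting it together, a hedged sketch of what a full notebook cell might look like (the body of func and the printed result are assumptions for illustration; peaks_rates is simply the name used above):

    from multiprocess import Pool  # !pip install multiprocess

    def func(url):
        import time                # imports inside the function, as noted above
        time.sleep(1)              # stand-in for real work at the given address
        return url.upper()

    args = ['xx1.com', 'xx2.com']

    with Pool() as pool:
        peaks_rates = pool.map(func, args)   # one call per address, in parallel

    print(peaks_rates)             # ['XX1.COM', 'XX2.COM'], after roughly 1 second

Collecting results through pool.map also sidesteps the problem in the question: since each process gets its own copy of a, appending to a global list inside the workers never shows up in the notebook.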