Hello! A MaybeEncodingError occurs during parallelization.

I do the following:

    trained_models = []
    for i in range(len(l) - 1):
        tasks = [joblib.delayed(model)() for model in models[l[i]:l[i+1]]]
        trained_models.append(joblib.Parallel(n_jobs=n_cpu)(tasks))
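For reference, a self-contained version of the same chunked-training pattern runs fine as long as the result is picklable; train_model and the models list below are hypothetical stand-ins for the real ones:

    import functools
    import joblib

    def train_model(model_id):
        # hypothetical stand-in: returns a plain, easily picklable result
        return {"model_id": model_id, "loss": 0.0}

    models = [functools.partial(train_model, i) for i in range(10)]
    n_cpu = 4

    # split the model list into chunks of n_cpu and train each chunk in parallel
    l = list(range(0, len(models), n_cpu))
    l.append(len(models))
    trained_models = []
    for i in range(len(l) - 1):
        tasks = [joblib.delayed(model)() for model in models[l[i]:l[i+1]]]
        trained_models.append(joblib.Parallel(n_jobs=n_cpu)(tasks))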

Here is the model:

    functools.partial(<function train_model at 0x7f89337e7d90>, functools.partial(<class 'models.monolayer_rnn.MonolayerRNN'>, n_features=19, n_instruments=13, n_gru0=30))

It returns the object <models.monolayer_rnn.MonolayerRNN object at 0x7f893382d0b8>.
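For context, the repr above corresponds to a construction roughly like the following; MonolayerRNN and train_model are stubbed out here, and the exact train_model signature is an assumption:

    import functools

    class MonolayerRNN:
        # hypothetical stub mirroring models.monolayer_rnn.MonolayerRNN
        def __init__(self, n_features, n_instruments, n_gru0):
            self.n_features = n_features
            self.n_instruments = n_instruments
            self.n_gru0 = n_gru0

    def train_model(model_factory):
        # hypothetical stub: build the network; the real function presumably also fits it
        return model_factory()

    factory = functools.partial(MonolayerRNN, n_features=19, n_instruments=13, n_gru0=30)
    model = functools.partial(train_model, factory)
    print(model())  # <__main__.MonolayerRNN object at 0x...>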

Full traceback:

    MaybeEncodingError                        Traceback (most recent call last)
    <ipython-input-29-c74bd2fce434> in <module>()
    ----> 1 get_ipython().run_cell_magic('time', '', "l = list(range(0, len(models), n_cpu))\nl.append(len(models))\ntrained_models = []\nfor i in range(len(l) - 1):\n    tasks = [joblib.delayed(model)() for model in models[l[i]:l[i+1]]]\n    trained_models.append(joblib.Parallel(n_jobs=n_cpu)(tasks))\n    print(l[i+1], 'done of ', l[-1])")

    /root/miniconda/envs/jupyterhub_py3/lib/python3.4/site-packages/IPython/core/interactiveshell.py in run_cell_magic(self, magic_name, line, cell)
       2291             magic_arg_s = self.var_expand(line, stack_depth)
       2292             with self.builtin_trap:
    -> 2293                 result = fn(magic_arg_s, cell)
       2294             return result
       2295

    /root/miniconda/envs/jupyterhub_py3/lib/python3.4/site-packages/IPython/core/magics/execution.py in time(self, line, cell, local_ns)

    /root/miniconda/envs/jupyterhub_py3/lib/python3.4/site-packages/IPython/core/magic.py in <lambda>(f, *a, **k)
        191     # but it's overkill for just that one bit of state.
        192     def magic_deco(arg):
    --> 193         call = lambda f, *a, **k: f(*a, **k)
        194
        195     if callable(arg):

    /root/miniconda/envs/jupyterhub_py3/lib/python3.4/site-packages/IPython/core/magics/execution.py in time(self, line, cell, local_ns)
       1165         else:
       1166             st = clock2()
    -> 1167             exec(code, glob, local_ns)
       1168             end = clock2()
       1169             out = None

    <timed exec> in <module>()

    /root/miniconda/envs/jupyterhub_py3/lib/python3.4/site-packages/sklearn/externals/joblib/parallel.py in __call__(self, iterable)
        664             # consumption.
        665             self._iterating = False
    --> 666             self.retrieve()
        667             # Make sure that we get a last message telling us we are done
        668             elapsed_time = time.time() - self._start_time

    /root/miniconda/envs/jupyterhub_py3/lib/python3.4/site-packages/sklearn/externals/joblib/parallel.py in retrieve(self)
        516                 self._lock.release()
        517             try:
    --> 518                 self._output.append(job.get())
        519             except tuple(self.exceptions) as exception:
        520                 try:

    /root/miniconda/envs/jupyterhub_py3/lib/python3.4/multiprocessing/pool.py in get(self, timeout)
        597             return self._value
        598         else:
    --> 599             raise self._value
        600
        601     def _set(self, i, obj):

    MaybeEncodingError: Error sending result: '<models.monolayer_rnn.MonolayerRNN object at 0x7f893382d0b8>'. Reason: 'RuntimeError('maximum recursion depth exceeded while pickling an object',)'

How can I solve the problem without changing the return value of the model?

  • The real error is "maximum recursion depth exceeded while pickling an object". And what was wrong with the standard multiprocessing, that you had to drag in some obscure library? - m9_psy
  • I googled the error; the same error occurs with the standard multiprocessing as well. - Tolkachev Ivan
  • @TolkachevIvan: to transfer objects between processes, they are serialized into bytes with pickle (in multiprocessing and, apparently, in joblib too); pickle does NOT depend on multiprocessing or joblib. To reproduce the error, call pickle.dumps() on the returned object of type models.monolayer_rnn.MonolayerRNN and find out whether the "RuntimeError (maximum recursion depth exceeded)" is a bug in its __getstate__() method or the object is just deeply nested; in the latter case, increase the limit with sys.setrecursionlimit() (see the sketch below). - jfs
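A minimal sketch of the check jfs describes, assuming obj is the MonolayerRNN instance returned by train_model (Python 3.4 reports the overflow as a RuntimeError; Python 3.5+ raises RecursionError, a RuntimeError subclass, so the except clause below covers both):

    import pickle
    import sys

    def check_picklable(obj, new_limit=10000):
        # try to pickle obj; if pickling blows the recursion limit, retry with a larger one
        try:
            return pickle.dumps(obj)
        except RuntimeError as exc:
            print("pickling failed:", exc)
            # if the object is merely deeply nested (not a buggy __getstate__),
            # a larger recursion limit lets pickle finish
            old_limit = sys.getrecursionlimit()
            sys.setrecursionlimit(new_limit)
            try:
                return pickle.dumps(obj)
            finally:
                sys.setrecursionlimit(old_limit)

    # usage: check_picklable(trained_model), where trained_model is the returned network

Note that in the failing run the result is pickled inside the worker process (multiprocessing wraps the pickling failure in MaybeEncodingError while sending the result back), so raising the limit would also have to happen in the workers, e.g. at the top of train_model.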
