Nipype runs into errors when run with the MultiProc plugin

Hi,

Could you please help me resolve this issue with parallel processing of a Nipype workflow? I cannot figure out what is going wrong when I run the workflow with the MultiProc plugin, and why it works fine with the Linear plugin. It starts throwing errors right at the beginning of execution and crashes within seconds. More specifically, the execution tries to open result files that have not been created yet.
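For context, this is roughly how the workflow is launched. This is only a minimal sketch: the workflow name, working directory, and n_procs value are taken from my actual script and the traceback paths, everything else (nodes, connections) is omitted or simplified.

# Minimal sketch of my run script; nodes and connections omitted for brevity.
import nipype.pipeline.engine as pe

preproc = pe.Workflow(name='preproc')
# working directory as it appears in the traceback paths
preproc.base_dir = '/home/fmri/Desktop/pipeLine/output/working_dir'
# ... node definitions and preproc.connect(...) calls omitted ...

# This completes without problems:
# preproc.run(plugin='Linear')

# This crashes within seconds with the tracebacks below:
preproc.run(plugin='MultiProc', plugin_args={'n_procs': 4})

Here are the tracebacks from the MultiProc run: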

Traceback (most recent call last):
  File "/home/fmri/Desktop/anaconda2/lib/python2.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 274, in _send_procs_to_workers
    num_subnodes = self.procs[jobid].num_subnodes()
  File "/home/fmri/Desktop/anaconda2/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 1234, in num_subnodes
    self._get_inputs()
  File "/home/fmri/Desktop/anaconda2/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 1250, in _get_inputs
    super(MapNode, self)._get_inputs()
  File "/home/fmri/Desktop/anaconda2/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 450, in _get_inputs
    results = loadpkl(results_file)
  File "/home/fmri/Desktop/anaconda2/lib/python2.7/site-packages/nipype/utils/filemanip.py", line 576, in loadpkl
    pkl_file = gzip.open(infile, 'rb')
  File "/home/fmri/Desktop/anaconda2/lib/python2.7/gzip.py", line 34, in open
    return GzipFile(filename, mode, compresslevel)
  File "/home/fmri/Desktop/anaconda2/lib/python2.7/gzip.py", line 94, in __init__
    fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb')
IOError: [Errno 2] No such file or directory: u'/home/fmri/Desktop/pripeLine/output/working_dir/preproc/_subject_id_S16/convert_xfm/result_convert_xfm.pklz'

Traceback (most recent call last):
  File "/home/fmri/Desktop/anaconda2/lib/python2.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 301, in _send_procs_to_workers
    jobid].hash_exists()
  File "/home/fmri/Desktop/anaconda2/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 244, in hash_exists
    hashed_inputs, hashvalue = self._get_hashval()
  File "/home/fmri/Desktop/anaconda2/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 407, in _get_hashval
    self._get_inputs()
  File "/home/fmri/Desktop/anaconda2/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 450, in _get_inputs
    results = loadpkl(results_file)
  File "/home/fmri/Desktop/anaconda2/lib/python2.7/site-packages/nipype/utils/filemanip.py", line 576, in loadpkl
    pkl_file = gzip.open(infile, 'rb')
  File "/home/fmri/Desktop/anaconda2/lib/python2.7/gzip.py", line 34, in open
    return GzipFile(filename, mode, compresslevel)
  File "/home/fmri/Desktop/anaconda2/lib/python2.7/gzip.py", line 94, in __init__
    fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb')
IOError: [Errno 2] No such file or directory: u'/home/fmri/Desktop/pipeLine/output/working_dir/preproc/_subject_id_S18/bet/result_bet.pklz'


RuntimeError Traceback (most recent call last)
<ipython-input-...> in <module>()
260 #Run
261 try:
--> 262 preproc.run(plugin='MultiProc', plugin_args={'n_procs': 4})
263 preproc.config['execution'] = {'stop_on_first_crash': 'False'}

/home/fmri/Desktop/anaconda2/lib/python2.7/site-packages/nipype/pipeline/engine/workflows.pyc in run(self, plugin, plugin_args, updatehash)
588 if str2bool(self.config['execution']['create_report']):
589 self._write_report_info(self.base_dir, self.name, execgraph)
--> 590 runner.run(execgraph, updatehash=updatehash, config=self.config)
591 datestr = datetime.utcnow().strftime('%Y%m%dT%H%M%S')
592 if str2bool(self.config['execution']['write_provenance']):

/home/fmri/Desktop/anaconda2/lib/python2.7/site-packages/nipype/pipeline/plugins/base.pyc in run(self, graph, config, updatehash)
277
278 self._remove_node_dirs()
--> 279 report_nodes_not_run(notrun)
280
281 # close any open resources

/home/fmri/Desktop/anaconda2/lib/python2.7/site-packages/nipype/pipeline/plugins/base.pyc in report_nodes_not_run(notrun)
99 logger.debug(subnode._id)
100 logger.info("***********************************")
--> 101 raise RuntimeError(('Workflow did not execute cleanly. '
102 'Check log for details'))
103

RuntimeError: Workflow did not execute cleanly. Check log for details

If you look for the "non-existent" files after the crash, do they exist?
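For example, a quick check along these lines (paths copied verbatim from the two IOErrors above; just a sketch) would show whether the result pickles are actually missing after the crash:

import os.path

# Paths copied from the IOErrors in the tracebacks above.
result_files = [
    '/home/fmri/Desktop/pripeLine/output/working_dir/preproc/_subject_id_S16/convert_xfm/result_convert_xfm.pklz',
    '/home/fmri/Desktop/pipeLine/output/working_dir/preproc/_subject_id_S18/bet/result_bet.pklz',
]
for path in result_files:
    print('%s -> exists: %s' % (path, os.path.exists(path)))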