Pipeline mode with Node + MapNode

Hello nipype experts,
I want to ask a question about how to run my subjects in parallel. In this link, http://nipype.readthedocs.io/en/latest/users/mapnode_and_iterables.html, node iterables are used to run subjects in parallel, but in my case there is a problem.
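Just to show what I mean, the iterables pattern from that page looks roughly like this (a minimal sketch; my_func is only a placeholder):

from nipype import Node
from nipype.interfaces.utility import Function

def my_func(subject_id):
    return subject_id

per_subject = Node(Function(input_names=['subject_id'],
                            output_names=['subject_id'],
                            function=my_func),
                   name='per_subject')
# iterables expands this node into one copy per subject when the workflow runs
per_subject.iterables = [('subject_id', ['sub-01', 'sub-02', 'sub-03'])]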

Here is my case:

I have 60 subjects that I want to run through FreeSurfer Tracula, which has three steps that run in sequence (trac-prep, trac-bedp, trac-path). So my code looks like this:

Each step is a MapNode that iterates over the individual subjects:

from nipype.pipeline.engine import MapNode
from nipype.interfaces.utility import Function

# run_prep, run_bedp, run_path, dipy_recon, absolute_path and the various
# *_dir variables are defined elsewhere in my script.

# trac-prep: one iteration per subject (all iterfield lists have the same length)
tracula_prep = MapNode(interface=Function(input_names=['subject_ids', 'BIDS_dir', 'BIDS_id_nii', 'BIDS_id_bvec',
                                                       'BIDS_id_bval', 'BIDS_id_mfmap', 'BIDS_id_pfmap', 'template',
                                                       'output_dir', 'SUBJECTS_DIR'],
                                          output_names=['subject_id', 'config_file'],
                                          function=run_prep),
                       iterfield=['subject_ids', 'BIDS_id_nii', 'BIDS_id_bvec',
                                  'BIDS_id_bval', 'BIDS_id_mfmap', 'BIDS_id_pfmap'],
                       name='trac-prep')
tracula_prep.inputs.template = absolute_path(config_template)
tracula_prep.inputs.output_dir = output_dir
tracula_prep.inputs.BIDS_dir = BIDS_dir
tracula_prep.inputs.SUBJECTS_DIR = fs_reconalled_dir

# trac-bedp: iterates over the per-subject outputs of trac-prep
tracula_bedp = MapNode(interface=Function(input_names=['subject_id', 'config_file', 'output_dir'],
                                          output_names=['subject_id', 'config_file'],
                                          function=run_bedp),
                       iterfield=['subject_id', 'config_file'],
                       name='trac-bedp')
tracula_bedp.inputs.output_dir = output_dir

# trac-path: iterates over the per-subject outputs of trac-bedp
tracula_path = MapNode(interface=Function(input_names=['subject_id', 'config_file', 'output_dir'],
                                          output_names=['subject_id'],
                                          function=run_path),
                       iterfield=['subject_id', 'config_file'],
                       name='trac-path')
tracula_path.inputs.output_dir = output_dir

# dipy_tracker: CSD reconstruction and tracking with dipy, one run per subject
dipy_tracker = MapNode(interface=Function(input_names=['subject_id', 'output_dir', 'recon'],
                                          output_names=['tensor_fa_file', 'tensor_evec_file', 'model_gfa_file', 'model_track_file'],
                                          function=dipy_recon),
                       iterfield=['subject_id'],
                       name='dipy_tracker')
dipy_tracker.inputs.recon = 'csd'
dipy_tracker.inputs.output_dir = output_dir

I use tmux to run this on my cluster. The first step seems to run in parallel, but once the first step has finished, the second step doesn't run in parallel: there is always just one core in the 'R' state, and the rest are 'S'.

I know that in the link iterables are used on a Node, but if I put more than one variable into iterables, the parameters get intertwined and the subjects are no longer independent of each other.
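To make the worry concrete, here is a rough sketch (not my real code) of what I mean by the parameters getting intertwined:

from nipype import Node
from nipype.interfaces.utility import IdentityInterface

subject_list = ['sub-01', 'sub-02']          # stand-ins for my 60 subjects
config_list = ['sub-01.cfg', 'sub-02.cfg']   # stand-ins for their config files

infosource = Node(IdentityInterface(fields=['subject_id', 'config_file']),
                  name='infosource')
# Two iterables are expanded as a cross product (2 x 2 = 4 branches here),
# so subjects and config files do not stay matched one-to-one.
infosource.iterables = [('subject_id', subject_list),
                        ('config_file', config_list)]
# (I believe setting infosource.synchronize = True would pair the lists
#  instead, but I have not tried it.)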

Do you have any good ideas?

Hi @Junhao_Wen,
It seems like your workflow is made to run without iterables - all of your nodes are MapNodes iterating through your subject list. I would suggest just passing a list of all your subjects to trac-prep.
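A rough, untested sketch of what that could look like (the nodes and run_* functions are the ones from your post; subject_list and n_procs are placeholders you would set yourself):

from nipype import Workflow

wf = Workflow(name='tracula_wf', base_dir=output_dir)

# Give trac-prep the full subject list (and matching lists for the other
# iterfield inputs); each downstream MapNode then iterates over the
# per-subject outputs of the previous step.
tracula_prep.inputs.subject_ids = subject_list  # e.g. ['sub-01', ..., 'sub-60']

wf.connect([(tracula_prep, tracula_bedp, [('subject_id', 'subject_id'),
                                          ('config_file', 'config_file')]),
            (tracula_bedp, tracula_path, [('subject_id', 'subject_id'),
                                          ('config_file', 'config_file')]),
            (tracula_path, dipy_tracker, [('subject_id', 'subject_id')])])

# MapNode iterations only run concurrently when the workflow is executed with
# a parallel plugin; a plain node.run() processes them one at a time.
wf.run(plugin='MultiProc', plugin_args={'n_procs': 8})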