Normalize Node won't save multiple files

Admittedly I’m completely new to Nipype and am just playing around for the time being, but I have run the following:
### Create nodes

from os.path import join as opj
from nipype import Node, Workflow
from nipype.algorithms.misc import Gunzip
from nipype.interfaces.afni import Volreg
from nipype.interfaces.fsl import BET
from nipype.interfaces.io import DataSink, SelectFiles
from nipype.interfaces.spm import Normalize, SliceTiming
from nipype.interfaces.utility import IdentityInterface

# Gunzip - unzip functional
gunzip = Node(Gunzip(), name="gunzip")

# Brain extraction with FSL BET
# When making a node the input/output files aren't specified; they are specified in the workflow
bet = Node(BET(mask=True, output_type='NIFTI'), name='BET')

# SliceTiming with SPM
interleaved_order = list(range(1, number_of_slices + 1, 2)) + list(range(2, number_of_slices + 1, 2))
sliceTiming = Node(SliceTiming(num_slices=number_of_slices,
                               time_repetition=TR,
                               time_acquisition=TR - TR / number_of_slices,
                               slice_order=interleaved_order,
                               ref_slice=2),
                   name="sliceTiming")
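
(As a sanity check on that slice_order line, here is a minimal sketch, assuming a hypothetical number_of_slices = 6 that is not from my actual data, of what the interleaved order evaluates to; in Python 3 the two ranges have to be wrapped in list() before they can be concatenated with +:)

# Hypothetical check, not part of the pipeline above
number_of_slices = 6
interleaved_order = list(range(1, number_of_slices + 1, 2)) + list(range(2, number_of_slices + 1, 2))
print(interleaved_order)  # [1, 3, 5, 2, 4, 6]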

# Volume Registration with AFNI
volReg = Node(Volreg(oned_file='move_params', oned_matrix_save='matrix_transforms', outputtype='NIFTI'), name='volReg')

# Coregister and normalize with SPM
normalize_func = Node(Normalize(jobtype='write'), name="normalize_func")

normalize_struc = Node(Normalize(jobtype='write'), name="normalize_struc")
# normalize_struc.inputs.jobtype = "write"

### Specify Workflows & Connect Nodes

# Create a preprocessing workflow

preproc = Workflow(name='preproc')
preproc.base_dir = opj(experiment_dir, working_dir)

# Connect all components of the preprocessing workflow

preproc.connect([(gunzip, bet, [('out_file', 'in_file')]),
                 (bet, sliceTiming, [('out_file', 'in_files')]),
                 (sliceTiming, volReg, [('timecorrected_files', 'in_file')]),
                 (volReg, normalize_func, [('out_file', 'apply_to_files')]),
                 ])

### Input & Output Stream

# Infosource - a function-free node to iterate over the list of subject names

infosource = Node(IdentityInterface(fields=['subject_id',
                                            'session_id']),
                  name="infosource")
infosource.iterables = [('subject_id', subject_list),
                        ('session_id', session_list)]

# SelectFiles

templates = {'func': 'data/{subject_id}/{session_id}.nii.gz'}
selectfiles = Node(SelectFiles(templates,
                               base_directory=experiment_dir),
                   name="selectfiles")

# Datasink

datasink = Node(DataSink(base_directory=experiment_dir,
                         container=output_dir),
                name="datasink")

# Use the following DataSink output substitutions

substitutions = [('_subject_id', ''),
                 ('session_id', '')]
datasink.inputs.substitutions = substitutions

# Connect SelectFiles and DataSink to the workflow

preproc.connect([(infosource, selectfiles, [('subject_id', 'subject_id'),
                                            ('session_id', 'session_id')]),
                 (selectfiles, gunzip, [('func', 'in_file')]),
                 (volReg, datasink, [('out_file', 'volreg.@output'),
                                     ('oned_matrix_save', 'volreg.@parameters'),
                                     ]),
                 (normalize_func, datasink, [('normalized_files', 'normalized.@output'),
                                             ('normalization_parameters', 'normalized.@parameters'),
                                             ]),
                 ])

I’m currently just trying to get different preprocessing streams to run. This one fails at the ('normalization_parameters', 'normalized.@parameters') connection above with: IOError: Duplicate node name "infosource" found.

But I’m confused. You can save multiple files from the volReg node, and according to the interface page Normalize outputs multiple files, so why won’t this DataSink format work here?

I’m sorry for what is probably a beginner question.

Are you running this workflow in a Jupyter notebook? Are you sure you didn’t run some of the cells multiple times (and generate a second infosource node) that you later tried to connect? If that might be the case, try restarting the kernel and running everything again.
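
If restarting doesn’t make the problem obvious, one way to check for the duplicate before the failing connect call is to list the nodes already registered in the workflow. This is only a sketch, assuming Workflow.list_node_names() is available in your Nipype version:

# Sketch: print the nodes the workflow already knows about.
# If "infosource" is already listed here but you re-created the infosource
# Node object in another cell, the next connect() call will try to register a
# second, different object under the same name and raise the duplicate-name IOError.
for name in preproc.list_node_names():
    print(name)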