Nipype: SPM realignment multiple sessions

Hello everyone,

I’m currently setting up my first Nipype script and would like to ask a question with regard to file selection and workflow:

I have a data set with 2 functional runs per participant and would like to realign them together (corresponding to two sessions within the realignment module in a classic SPM batch), resulting in one mean image. I’m using IdentityInterface and SelectFiles to get and pass the run-specific files to the realign node like this:

infosource = Node(IdentityInterface(fields=['subject_id', 'session_id']),
                  name='infosource')
infosource.iterables = [('subject_id', subject_list),
                        ('session_id', session_list)]
templates = {'func': 'fMRI/{subject_id}/{session_id}/func_*.nii'}
selectfiles = Node(SelectFiles(templates),
                   name='selectfiles')

mypipeline.connect([(infosource, selectfiles, [('subject_id', 'subject_id'),
                                               ('session_id', 'session_id')]),
                    (selectfiles, realign, [('func', 'in_files')])])

realignment is done run-wise rather than across runs, also resulting in two mean images.
This of course carries through the rest of my pipeline (e.g. coregistration, etc.).

I guess this happens because I use iterables for both participants and sessions, whereas I
should probably use a MapNode/iterfield for the sessions.
Could someone give me a hint or an idea on how exactly to do that? I’m not quite sure!

Regards, Peer


I also have a similar problem. I’ve got 6 runs and want to avoid running coregistration for each run separately. Is there a more efficient way to do this?



Ahoi hoi @Sebastian,

in the end it kinda depends on your dataset and its structure, as well as the preprocessing and subsequent analysis steps you’ve planned.
But for now, assuming you have a dataset (of course in BIDS) that contains 6 functional runs of the same task for a bunch of participants, and you want to apply realignment (via SPM) and coregistration (via FreeSurfer), something like the following should work:

from os.path import join as opj

import nipype.interfaces.freesurfer as fs
import nipype.interfaces.spm as spm
import nipype.interfaces.utility as util
import nipype.interfaces.io as io
import nipype.algorithms.misc as mc
import nipype.pipeline.engine as eng

# Gunzip node - unzip functional images, as SPM can't read .gz
gunzip = eng.MapNode(mc.Gunzip(), name="gunzip", iterfield=['in_file'])

# realign node - register functional images to the mean functional
realign = eng.Node(spm.Realign(register_to_mean=True),
                   name='realign')

# coregistration node - coregister the mean functional to the anatomical image
bbregister = eng.Node(fs.BBRegister(init='spm',
                                    contrast_type='t2'),
                      name='bbregister')

# Create a preprocessing workflow
preproc = eng.Workflow(name='preproc')
preproc.base_dir = opj(experiment_dir, working_dir)

# Connect all components of the preprocessing workflow  
preproc.connect([(gunzip, realign, [('out_file', 'in_files')]),
                 (realign, bbregister, [('mean_image', 'source_file')])])

# Infosource - a function free node to iterate over the list of subject names
infosource = eng.Node(util.IdentityInterface(fields=['subject_id']),
                      name='infosource')
infosource.iterables = [('subject_id', subject_list)]

# SelectFiles - to grab the data 
templates = {'func': 'bids_dataset/{subject_id}/func/task-test_run-*_bold.nii.gz'}
selectfiles = eng.Node(io.SelectFiles(templates),
                       name='selectfiles')

# connect Infosource and SelectFiles to the preprocessing workflow
preproc.connect([(infosource, selectfiles, [('subject_id', 'subject_id')]),
                 (infosource, bbregister, [('subject_id', 'subject_id')]),
                 (selectfiles, gunzip, [('func', 'in_file')]),])

(This of course doesn’t include necessary paths like the experiment, working and FreeSurfer directory, or the subject list.)

Based on that, you can continue by, e.g., coregistering and transforming the functional runs to a certain reference space via ANTs.

HTH, best, Peer



Thanks Peer!
So if I understand the code correctly, you select all functional runs at once and realign them to the mean image across runs. Then the mean image across runs is registered to the T1. Is that correct?

So far I’ve used ‘task_id’ (functional run) as an iterable as well and done the realignment, coregistration and normalization separately. So this would indeed be way more efficient.
Is realignment across runs a valid way to go?



Hi @Sebastian,

no biggie!
Yes, exactly. The option register_to_mean=True will result in a two-pass procedure:

- initially, SPM realigns the sessions to each other by aligning the first volume of each session to the first volume of the first session; subsequently, all volumes within each session are aligned to the first volume of that session
- after that, the volumes from the first realignment step are used to create a mean image, and then all volumes are aligned to that mean image

Using register_to_mean=False will “just” do the initial realignment.
In the example above the resulting mean image is registered to the T1-weighted image, yep.

I’m not sure if I completely understand your second question. Do you mean whether it’s okay to align images across runs? If that’s the case:
Puh, that’s one hell of a question (at least for me, I hope others with more expertise will drop in as well).
If you have multiple runs of the same task/conditions and plan to analyze your data in a mass-univariate way (GLM) across runs, then the time series should correspond to “roughly” the same location/voxel within and between runs. Otherwise, chances are that the signal of a given voxel contains signal from two (or more) different voxels or even types of tissue, up to signal loss in voxels near the borders of the images (e.g. in frontal areas).
If you meant something else: sorry, could you maybe elaborate on that?

HTH, best, Peer

Hi @PeerHerholz ,

Thanks for your answer and sorry for the delayed response. So even though I’m selecting all functional runs at once, Nipype detects that they are different runs?

With regards to the second part, yes that’s what I meant. So if I plan to analyze my data using MVPA this approach is no longer valid?

Thank you very much for your help,



Hi, Peer,

I have an issue when preproccessing multi-session data using the following code:

templates = {'anat': 'sub-{subject_id}/ses-d1/anat/',
             'func': 'sub-{subject_id}/ses-{ses_id}/func/'}

# Create SelectFiles node
sf = Node(SelectFiles(templates),
          name='selectfiles')
#sf.inputs.ses_id = 'd1'
# sf.inputs.ses_id = ['d1','d2','d3']
sf.inputs.task_id = 'exp'

subject_list = ['001', '002']
ses_list = ['d1','d2','d3']
sf.iterables = [('subject_id', subject_list),
                ('ses_id', ses_list)]

preproc.connect([(sf, gunzip_anat, [('anat', 'in_file')]),
                 (sf, gunzip_func, [('func', 'in_file')])])

My question is about the layout of the output folder: I expected the output to have two layers of folders, as below:


But it turned out to be only one layer, with each folder combining both subject and session, see below.


Functionally, it’s OK, but I am wondering: can I somehow change the layout of the output folder to what I expected?

The picture did not get attached.


Hi, @satra,
Thanks for your response. Uploaded now :wink:

Dear @satra @PeerHerholz,
I posted my problem as a new post here: How to use datasink to get BIDS style output folder structure?.
And I found the issue myself a few minutes after posting it. :joy::joy: