How to set the JoinNode to merge the outputs of susan_smooth workflow iterables?

Hi everyone,

I use the susan_smooth workflow on 8 functional task runs per subject. Now I want to examine the effects of different smoothing kernels, which I define as follows:

from nipype.workflows.fmri.fsl import create_susan_smooth
susan = create_susan_smooth()
susan.inputs.inputnode.fwhm = [4,6,8]

This works fine and I get 8 (runs) x 3 (fwhm levels) = 24 files as a result of the smoothing node.
Now I want to merge all 8 runs smoothed by each of the corresponding smoothing fwhm levels.
I assumed that a JoinNode would be the way to do it, maybe similar to this:

import nipype.pipeline.engine as pe
from nipype.interfaces.fsl import Merge

merge = pe.JoinNode(interface=Merge(), name='merge', joinfield=['in_files'], joinsource='???????')

However, it simply does not work, and I suspect it has to do with how the joinsource argument is defined. I tried joinsource='susan', joinsource='susan.smooth.fwhm', and joinsource='susan.smooth', but nothing seems to work. Could it be that JoinNode only works if one iterable is defined per node? Because in the susan_smooth workflow the smooth node is defined as:

smooth = pe.MapNode(
    interface=fsl.SUSAN(),
    iterfield=['in_file', 'brightness_threshold', 'usans', 'fwhm'],
    name='smooth')

So the JoinNode does not know which iterables to join?

I know this seems like a simple problem to solve, but I am stuck on this.
Thanks in advance for your help!

If it helps, the error looks like this:

source = source[0]
IndexError: list index out of range

hi @lennart - create_susan_smooth will return a list of filenames, so all you should have to do is use a regular Node to Merge them.

import nipype.pipeline.engine as pe
from nipype.interfaces.utility import Merge
from nipype.workflows.fmri.fsl import create_susan_smooth

susan = create_susan_smooth()
susan.inputs.inputnode.fwhm = [4,6,8]
# your susan inputs

joiner = pe.Node(Merge(1), name='joiner')
joiner.inputs.ravel_inputs = True

# make a workflow to connect nodes
wf = pe.Workflow(name='smooth_wf')
wf.connect(susan, 'outputnode.smoothed_files', joiner, 'in1')

JoinNodes are only necessary if you are working with iterables - there is some good documentation on this and other use cases here

Hi @mgxd, thanks for your quick response.

I followed your suggestion, but the joiner node now joins all runs that were smoothed with all specified fwhm levels together (i.e., it merges all 8 (runs) x 3 (fwhm levels) = 24 files together and does not separate files corresponding to the respective fwhm levels). However, that is not what I want. I need all runs that were smoothed with the same fwhm level merged together (i.e., I want to end up with 3 merged files where the 8 smoothed runs with fwhm of 4, 6 and 8 mm are merged together, respectively). Sorry if I did not make this clear before. Any ideas? Thanks!

Ok, I found a solution. It seems somewhat hacky to me but it works. For anyone interested, here it is:

First, I found that the outputs of the susan_smooth workflow came in the following order: [run1fwhm4, run1fwhm6, run1fwhm8, run2fwhm4, run2fwhm6, run2fwhm8, run3fwhm4, ...] etc.
As stated above, my goal was to merge all runs smoothed by one of the respective fwhm levels together.

I created a Select node that has the index input defined as an iterable:

import numpy as np
import nipype.interfaces.utility as util
import nipype.pipeline.engine as pe

n_runs = 8
n_levels = 4
indices = [list(np.arange(level, n_runs * n_levels, n_levels))
           for level in range(n_levels)]
select = pe.Node(interface=util.Select(), name='select')
select.iterables = ('index', indices)

So indices will look like:

[[0, 4, 8, 12, 16, 20, 24, 28],
 [1, 5, 9, 13, 17, 21, 25, 29],
 [2, 6, 10, 14, 18, 22, 26, 30],
 [3, 7, 11, 15, 19, 23, 27, 31]]
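With that file ordering (fwhm varying fastest within each run), the files for one fwhm level sit at every n_levels-th position, so the index lists above are equivalent to plain slice notation. A small self-contained sketch with placeholder file names (not real susan_smooth outputs) illustrating the grouping:

```python
# Sketch: reproduce the interleaved grouping with plain list slicing.
# File names are placeholders; susan_smooth's outputs are ordered with
# fwhm varying fastest within each run.
n_runs, n_levels = 2, 3
fwhms = [4, 6, 8]
files = [f"run{r}_fwhm{f}.nii.gz"
         for r in range(1, n_runs + 1) for f in fwhms]
# files == ['run1_fwhm4.nii.gz', 'run1_fwhm6.nii.gz', 'run1_fwhm8.nii.gz',
#           'run2_fwhm4.nii.gz', 'run2_fwhm6.nii.gz', 'run2_fwhm8.nii.gz']

# Every n_levels-th file, starting at offset `level`, belongs to one fwhm:
groups = [files[level::n_levels] for level in range(n_levels)]
for fwhm, group in zip(fwhms, groups):
    print(fwhm, group)
```

The Select node's index iterable above does the same thing, one fwhm level per iteration.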

Then I connect the Select node to my Merge node defined as:

import nipype.pipeline.engine as pe
from nipype.interfaces.fsl import Merge

merge = pe.Node(interface=Merge(), name='merge')
merge.inputs.dimension = 't'
merge.inputs.tr = 1.25

The nodes are connected as follows:

wf.connect(susan, 'outputnode.smoothed_files', select, 'inlist')
wf.connect(select, 'out', merge, 'in_files')
wf.connect(merge, 'merged_file', datasink, 'merge')

Thanks again @mgxd for your comment! It pointed me in the right direction towards the Nipype utility interfaces. If anyone has another solution, I am of course still interested to see it!


hi all,

reviving this thread as it’s possible someone has found a nice fix for this since the last post! I’m dealing with a similar issue (have two iterables that get crossed, and I want to later join along only one of the iterables). I’ve got a BIDS-compliant dataset, and in nipype I’m using

  [("subject", range(n)),
   ("run", range(8))]

to grab data such that my level 1 workflow (FSL FEAT/FILMGLS) iterates over subject (n subjects) and run (say, 8 runs per subject).

that all works well! then, for level 2, I want to run FSL FLAME, iterating over each subject (but combining the 8 cope and 8 varcope images of interest, one of each per run). If I were to set up a Node for utility.Merge(), I'm worried that, as above, it would merge all subjects' and runs' filenames into one giant list, instead of n lists with 8 files each. Using a JoinNode for fsl.Merge() is another option, per Fixed-effects workflow for fmriprep outputs?, but it still looks like once you tell your JoinNode the joinsource input node, it will join over all of that node's iterables if there is more than one, again resulting in one giant list.

When you’re using more than one iterable in a workflow, and then later want to join along one or some subset of those iterables, is there a flexible solution beyond:

  • keeping the level 1 and level 2 workflows totally separate, where:
    • the level 1 workflow writes to datasink iterating over each subject/run
    • and then the level 2 workflow reads from a DataGrabber iterating over each subject
    • there would be two separate workflow graphs for this
  • creating a single mega-workflow, where:
    • each subject gets their own level 1 workflow that iterates only over run, joining to create one output per subject
    • and then feeding those outputs into a single level 2 workflow that iterates over subject
    • the graph for this would look like a giant funnel
  • creating a single mega-workflow, using a hard-coded Select node to hack the joining per @lennart’s solution above


thanks for your wisdom!