I am facing some difficulties setting up a GLM workflow that runs separately over all subjects and EPI runs in a dataset.
The first step, splitting the workflow over subjects, works fine. To make this work, I basically followed the Nipype tutorial on iterables and used this code:
```python
# First, let's specify the list of subjects
subject_list = ['sub-01', 'sub-02', 'sub-03', 'sub-04', 'sub-05']

from nipype import Node, IdentityInterface

infosource = Node(IdentityInterface(fields=['subject_id']),
                  name="infosource")
infosource.iterables = [('subject_id', subject_list)]
```
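For context, the `infosource` then feeds the rest of the per-subject pipeline in the usual way, e.g. through a `SelectFiles` node (the template and base directory below are just placeholders for my layout):

```python
from nipype import Workflow, Node, SelectFiles

# grab each subject's functional images; {subject_id} is filled in
# by the iterable coming from infosource
templates = {'func': '{subject_id}/func/{subject_id}_task-*_bold.nii.gz'}
selectfiles = Node(SelectFiles(templates, base_directory='/data/bids'),
                   name='selectfiles')

l1analysis = Workflow(name='l1analysis')
l1analysis.connect(infosource, 'subject_id', selectfiles, 'subject_id')
```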
However, next I want to do basically the same thing for the runs, separately for each subject. Unfortunately, not all subjects have the same number of runs, so I can't hardcode a list of run numbers in advance. Instead, I would have to check at runtime how many runs were found for a given subject and work with that information. This is where the problems start. My first idea was to specify an `iterables` field for the input node of the run workflow at runtime, but this didn't work, as the `iterables` field needs to be specified beforehand, it seems.
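Discovering the runs of a given subject at runtime is not the hard part; a small `Function` node can do that (the BIDS-style path below is only a placeholder for my layout):

```python
from nipype import Node, Function

def get_runs(subject_id, base_dir):
    """Return the EPI run files found for one subject."""
    import os
    from glob import glob
    return sorted(glob(os.path.join(base_dir, subject_id,
                                    'func', '*_bold.nii.gz')))

getruns = Node(Function(input_names=['subject_id', 'base_dir'],
                        output_names=['run_files'],
                        function=get_runs),
               name='getruns')
getruns.inputs.base_dir = '/data/bids'
```

The problem is that the output of such a node only exists while the workflow is running, whereas `iterables` have to be fixed when the graph is expanded, so I don't see how to feed one into the other.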
So, following this suggestion by @satra, I am currently planning to use plain Python to loop over subjects and create a separate workflow per subject that branches out over each EPI run. However, this has the disadvantage that it becomes a little annoying to parallelize the subject workflows and to link this first-level workflow to the higher-level analysis.
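For reference, this is roughly the plain-Python variant I had in mind (reusing `subject_list` from above; the paths and the per-run nodes are placeholders):

```python
import os
from glob import glob

from nipype import Workflow, Node, IdentityInterface

base_dir = '/data/bids'  # placeholder for the dataset root

subject_workflows = []
for subject_id in subject_list:
    # find however many runs this subject actually has
    run_files = sorted(glob(os.path.join(base_dir, subject_id,
                                         'func', '*_bold.nii.gz')))

    # per-subject workflow that branches out over that subject's runs
    wf = Workflow(name='l1_%s' % subject_id.replace('-', '_'))
    runsource = Node(IdentityInterface(fields=['run_file']),
                     name='runsource')
    runsource.iterables = [('run_file', run_files)]
    wf.add_nodes([runsource])
    # ... connect the actual per-run GLM nodes to runsource here ...
    subject_workflows.append(wf)

# each subject workflow then has to be run (and parallelized) by hand:
# for wf in subject_workflows:
#     wf.run(plugin='MultiProc')
```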
Given that it is rather common to have multiple runs per subject, I was wondering whether someone has experience with such a dataset and has suggestions on the best route to take here.