I recently moved my data to BIDS format and used the fMRIPrep Docker image for preprocessing. This is really cool! Thanks to the folks who made it possible.
My study is designed as a single session with four runs, with stimuli randomized across the runs. Having read that a single GLM is not suitable for data spanning multiple runs, and having planned to use FEAT for the analysis, I’m now at a loss.
My eventual goals are percent signal change in ROIs, and contrast images for conditions (i.e., subsets of my video stimuli fall within categories).
If it helps, the data are arranged thus:
│ ├── anat
│ │ ├── sub-15_T1w.json
│ │ └── sub-15_T1w.nii.gz
│ ├── dwi
│ │ ├── sub-15_dwi.bval
│ │ ├── sub-15_dwi.bvec
│ │ ├── sub-15_dwi.json
│ │ └── sub-15_dwi.nii.gz
│ ├── fmap
│ │ ├── sub-15_magnitude1.json
│ │ ├── sub-15_magnitude1.nii.gz
│ │ ├── sub-15_phasediff.json
│ │ └── sub-15_phasediff.nii.gz
│ └── func
│     ├── sub-15_task-ao_run-01_bold.json
│     ├── sub-15_task-ao_run-01_bold.nii.gz
│     ├── sub-15_task-ao_run-01_events.tsv
│     ├── sub-15_task-ao_run-02_bold.json
│     ├── sub-15_task-ao_run-02_bold.nii.gz
│     ├── sub-15_task-ao_run-02_events.tsv
│     ├── sub-15_task-ao_run-03_bold.json
│     ├── sub-15_task-ao_run-03_bold.nii.gz
│     ├── sub-15_task-ao_run-03_events.tsv
│     ├── sub-15_task-ao_run-04_bold.json
│     ├── sub-15_task-ao_run-04_bold.nii.gz
│     └── sub-15_task-ao_run-04_events.tsv
I see that AFNI has tools for concatenating runs. Is this the general recommendation?
You can concatenate runs, but there are solid arguments against it. For example, fMRI data contain substantial temporal autocorrelation, which most analysis packages try to model in one way or another; simply concatenating runs creates breaks in that autocorrelation structure. You also probably don’t want to treat the motion that happens between two runs the same way you treat the motion that happens within a run, because its effects on your data aren’t the same. And so on.
What many (probably most, at this point) people recommend doing is modeling each run separately, then combining the outputs from those runs in a fixed-effects model for the subject, then doing your group analysis as a mixed-effects model that takes both the parameter estimates and their variance estimates from each subject into account. (FEAT’s user guide details this: https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FEAT/UserGuide)
This works best if you have your stimulus conditions equally distributed among your runs. If you don’t, things become a little more complicated.
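To make the run-then-fixed-effects scheme concrete, here is a minimal numerical sketch of inverse-variance-weighted fixed effects, which is conceptually what FEAT computes when you combine runs at the second level. The `fixed_effects` function and the numbers fed to it are purely illustrative, not part of FSL; in practice the inputs would be each run’s contrast estimate (COPE) and its variance (VARCOPE) at every voxel.

```python
import numpy as np

def fixed_effects(effects, variances):
    """Precision-weighted fixed-effects combination across runs.

    effects, variances: array-likes of shape (n_runs, ...) holding each
    run's contrast estimate and its variance.
    Returns the combined estimate, its variance, and a z-like statistic.
    """
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    precision = 1.0 / variances                 # weight each run by 1/variance
    combined_var = 1.0 / precision.sum(axis=0)  # variance of the weighted mean
    combined_effect = combined_var * (precision * effects).sum(axis=0)
    stat = combined_effect / np.sqrt(combined_var)
    return combined_effect, combined_var, stat

# Hypothetical single-voxel example with four runs of equal reliability:
effect, var, stat = fixed_effects([2.0, 1.5, 2.5, 2.0],
                                  [0.5, 0.5, 0.5, 0.5])
```

With equal variances this reduces to the ordinary mean across runs, but when runs differ in noise level (say, one run with lots of motion), the noisier run is automatically down-weighted.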
Complicated? That’s kind of where I seem to live.
My stimuli each repeat once and are pseudo-randomly distributed across the four runs. The distribution is the same for each subject.
Thanks for the info, and for steering me in the right direction!
I have a similar issue (two runs per task). @JohnAtl, did this suggestion work out for you? How did you go about combining the separately modeled runs?