This post was made in the context of an analysis using EPINorm. The conversation shifted to valid ways of implementing this analysis in afni_proc.py, hence the selected answer.
Can I convolve fMRI data in native subject space, then warp the statistical maps to template space? Or does the convolution need to happen in template space itself?
Can you describe a bit more about what your main goals are with your analysis? While in theory either option should produce comparable results, in general it is good to minimize the amount of data interpolation. fMRIPrep, for example, is nice in that it performs all registrations in a single step (rather than data jumping from space A → some middleman space → space B).
I’m currently hashing out ideas for how to do processing using the EPINorm approach and afni_proc.py. One of the ideas floated by the team was to do processing in native space and then warp the statistical maps; however, @oesteban’s comment on this thread made me doubt that idea.
I am convinced that I have to do processing in template space, but I am puzzling over how to do motion correction + registration prior to afni_proc.py, since there is no option to inject 3dvolreg output (with the exception of -regress_censor_extern). I am guessing I’ll have to create a custom processing script in order to do EPINorm with AFNI tools.
You could do an EPINorm-style analysis in afni_proc.py by supplying the -copy_anat option with a subject EPI template and -tlrc_base with whatever EPI template you want to use. You might want to adjust the cost function used for the alignment via -align_opts_aea -cost XXX, where you choose the cost function that’s best for your EPI to (subject) EPI template. You may also need to tweak the nonlinear parameters, although I’d give the defaults a chance depending on your template. For tweaks, I would recommend first a run through @SSwarper with your subject EPI to the template EPI, and then adjusting cost values as necessary.
afni_proc.py would carry your movement/volreg and registration parameters over to the regression, and you would also benefit from a single interpolation of your functional data, since we concatenate the transforms together before applying them. The alternative of using external software or separate steps for motion correction + registration will result in another interpolation step and possibly more blur.
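To make the option combination above concrete, here is a minimal sketch of what such an EPINorm-style afni_proc.py call might look like. All file names (subj_epi_template.nii.gz, mni_epi_template.nii.gz, the run datasets) are hypothetical placeholders, and the lpa cost is only a starting point to be checked against your own alignment quality:

```bash
# EPINorm-style afni_proc.py sketch: the subject EPI template stands in for
# the anatomical, and an EPI group template is used as the tlrc base.
afni_proc.py                                                  \
    -subj_id sub01                                            \
    -dsets sub01_run1+orig.HEAD sub01_run2+orig.HEAD          \
    -blocks tshift align tlrc volreg blur mask scale regress  \
    -copy_anat subj_epi_template.nii.gz                       \
    -align_opts_aea -cost lpa                                 \
    -tlrc_base mni_epi_template.nii.gz                        \
    -tlrc_NL_warp                                             \
    -volreg_align_e2a                                         \
    -volreg_tlrc_warp
```

With -volreg_tlrc_warp, the volreg, EPI-to-template, and template-to-standard transforms are concatenated and applied in one interpolation, which is the benefit described above.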
FWIW, and even as the EPINorm paper says, the benefit of EPINorm is smaller (both in their paper and in my personal experience) when using distortion correction, which can also be set up in afni_proc.py via the blip options (-blip_forward_dset / -blip_reverse_dset).
To the question of spatial normalization before or after statistics: I’d ping Gang Chen over on the AFNI message board, since I know he’s given this considerable thought: https://discuss.afni.nimh.nih.gov
These two papers give some food for thought on this topic, if you haven’t read them yet:
Ng, B.; Abugharbieh, R.; McKeown, M. J. Adverse Effects of Template-Based Warping on Spatial FMRI Analysis; Hu, X. P., Clough, A. V., Eds.; Lake Buena Vista, FL, 2009; p 72621Y. https://doi.org/10.1117/12.811422.
Özcan, M.; Baumgärtner, U.; Vucurevic, G.; Stoeter, P.; Treede, R.-D. Spatial Resolution of FMRI in the Human Parasylvian Cortex: Comparison of Somatosensory and Auditory Activation. NeuroImage 2005, 25 (3), 877–887.
If I am using the fMRI run itself as a template (i.e., the first volume, as in the Calhoun et al. 2017 paper), would it make sense to forgo the align step?
As you wrote, if you’re using the standard afni_proc.py pipeline, then you’re already doing an alignment to that first volume in the volreg step.
3dvolreg does your rigid-body alignment to whatever volume you ask it to use. The align step does the EPI-to-anatomical alignment (or, in this case, EPI to the subject EPI template). I suspect you’d want to use some modification of the volume for this align step, such as the mean (e.g., 3dMean) of the entire run (post motion correction), or even an average of all runs as your subject template (assuming you have multiple runs). My reading of EPINorm and its offshoots is to use the average EPI over all runs of that subject as your -copy_anat dataset, and then to build a template of all EPIs across all subjects via AFNI (@toMNI_Qwarpar), ANTs (antsMultivariateTemplateConstruction2.sh), or Template-O-Matic (“Template-O-Matic: a toolbox for creating customized pediatric templates”). You could also do what the article did and use the MNI EPI template.
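As a rough sketch of the subject-template step described above, assuming two runs and placeholder file names, the per-run means could be built and averaged like this:

```bash
# Motion-correct each run to a common base volume (placeholder dataset names).
3dvolreg -base sub01_run1+orig'[0]' -prefix vr_run1 sub01_run1+orig
3dvolreg -base sub01_run1+orig'[0]' -prefix vr_run2 sub01_run2+orig

# Temporal mean of each motion-corrected run.
3dTstat -mean -prefix mean_run1 vr_run1+orig
3dTstat -mean -prefix mean_run2 vr_run2+orig

# Average the run means into a single-subject EPI template
# to hand to afni_proc.py via -copy_anat.
3dMean -prefix subj_epi_template mean_run1+orig mean_run2+orig
```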
How does this pipeline strike you? I’m sticking pretty close to the description in Calhoun et al. since I suspect you’re going to have to answer to reviewers.
That sounds good. I will definitely try that! (I’ll be using 3dTstat in lieu of 3dMean.)
Here is my most recent iteration. Per @pmolfese’s suggestion, I made a single-subject EPI template using 3dTstat and refined it a bit. The results look great so far!
[Image: MNI 152 SurfVol overlaid with the “group mask” output]
The motivation for using the MNI EPI template instead of a study-specific template is so we can take advantage of the MNI SUMA surfaces for visualization (and ROIs). Would this entail additional processing considerations?
(The only additional processing I might need to do is -tlrc_NL_warp; since I based the initial script off of uber_subject.py output, I’ve been doing affine registration only.)
That looks pretty solid! I don’t foresee any major processing hurdles with using the MNI EPI template and then visualizing on the SUMA surfaces for MNI. You might need a small extra alignment, since I can’t recall whether the MNI EPI template is exactly aligned to the MNI template in AFNI (AFNI makes a distinction between MNI and MNI-ANAT), but any alignment can be done with the script
I think you want affine warps from the single-run EPIs to the single-subject average EPI. And then, yes, you can (and maybe should) do the nonlinear warp from the single-subject EPI to the MNI EPI. How well that works will depend heavily on how much texture is in each of those EPI templates.
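For the subject-EPI-to-MNI-EPI leg, one way to sketch the affine + nonlinear combination in AFNI is auto_warp.py, which wraps 3dAllineate (affine) followed by 3dQwarp (nonlinear). File names here are placeholders, and whether to skull-strip depends on how the templates were built:

```bash
# Affine + nonlinear warp of the subject EPI template to the MNI EPI template.
auto_warp.py -base mni_epi_template.nii.gz   \
             -input subj_epi_template.nii.gz \
             -skull_strip_input no
```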
All this said, you might also consider doing a standard surface-based analysis just to get the cortical activations, even as you use the MNI EPI for the subcortical findings.