So, after several discussions with colleagues, I figured out that normalization should be done as a last step.
I am running a dual BOLD-ASL sequence, and my goal is to use both the BOLD and ASL data to compute a single dataset after regression (first-level analysis).
I have two options:
- Normalizing before regression, which adds the transformation noise to my computation twice
- Normalizing after combining, so that I only transform the data once.
What I would like to know is whether the transformation (I use AFNI's 3dvolreg) works as well on data that isn't BOLD anymore, but percent signal change.
If the normalization works the same, would it be relevant to do it just before the second-level analysis?
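For clarity, by percent signal change I mean the usual scaling of each voxel's time series to its temporal mean. A minimal numpy sketch (one common convention, not necessarily the exact computation in any given pipeline):

```python
import numpy as np

def percent_signal_change(ts):
    """Convert a voxel time series to percent signal change
    relative to its temporal mean (one common convention)."""
    baseline = ts.mean(axis=-1, keepdims=True)
    return 100.0 * (ts - baseline) / baseline

# toy time series with mean 100 and small fluctuations
ts = np.array([98.0, 100.0, 102.0, 100.0])
print(percent_signal_change(ts))  # → [-2.  0.  2.  0.]
```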
Thanks for any help
Perhaps you’ll find this thread helpful: Warping fMRI statistical maps? (EPINorm'ing)
Thanks for your reply and the link to the chat and publications.
It seems like it is okay to do so, but I didn't see anyone state it explicitly.
In one of the articles, they mention doing the normalization on ROIs, which makes sense too, and this means that it was done after the first-level analysis.
Do they apply the transform to the beta coefficients or to the percent signal change?
Or even, as I read it, no warping at all…
It will be trickier to identify the regions, though.
(Spatial) normalization, AKA alignment or registration, can be done at many points in an analysis flow.
- The first question is what kind of alignment to do: rigid body (translation+rotation, = 6 degrees of freedom), which is what 3dvolreg does; linear affine (translation+rotation+scaling+shearing, 12 degrees of freedom), which is what 3dAllineate or flirt does; or nonlinear (typically a global affine, followed by localized refinement over “patches”, meaning that hundreds-to-bajillions of degrees of freedom are used, depending on the specifics and warping scale), such as 3dQwarp, @SSwarper, fnirt, ANTS, etc. The spatial scale and voxel size of your data affect your choice of alignment, as does the amount of image contrast: I don’t see much value in doing nonlinear alignment with most EPI datasets, for example, given their low spatial contrast.
- The second question is what cost function to use: do your “source” and “base” datasets have similar or differing spatial contrasts? The typical 3dvolreg usage is EPI-to-EPI alignment across time, so one assumes veeery similar contrast, and the default “ls” (least squares) cost function is generally fine. For alignment of similar contrasts across different sessions or the same acquisition type, in AFNI we might recommend “lpa” or “lpa+ZZ” for local Pearson alignment (not available in 3dvolreg, but in 3dAllineate and 3dQwarp)—see Saad et al., 2009. For datasets with different contrasts, we would use “lpc” or “lpc+ZZ”, which is what we recommend by default for EPI-anatomical alignment in most cases (see again the Saad et al. paper, or Taylor et al., 2018).
- Thirdly, what interpolation kernel should be used when applying an alignment? Volumetric MRI datasets are on discrete grids, and each alignment process either changes the grid or moves the data around on the same grid—this is an inherently blurring process, meaning that spatial specificity will be lost. You can choose a bit about how much smoothness (losing edges) or sharpness (introducing ringing) to keep in real-valued data; or, for integer-valued dsets like ROI maps, you can choose to keep integer values as integers—in AFNI, we call this “nearest neighbor” interpolation.
- Fourthly, there is the issue of how to integrate the alignment into your processing. This matters a lot because of the point made in #3 above: any nontrivial regridding or alignment introduces blurring/smoothing. (I think this might have been referenced as “noise” in the original post.) Because of this, you likely want to minimize the number of separate alignments you have. In afni_proc.py, for example, we separately calculate EPI-EPI alignment for motion correction/adjustment, EPI-anatomical alignment for spatial specificity, and anatomical-template alignment for standard space-izing, but before applying any of them in the final processing, we concatenate all alignments into a single warp and then just regrid once. This minimizes data loss/blurring.
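To make point #1 concrete, here is a toy numpy sketch (not AFNI code, just an illustration with hypothetical helper names) of the difference between rigid-body and affine transforms in homogeneous coordinates. Rigid-body preserves lengths; an affine with scaling does not:

```python
import numpy as np

def rigid_body(theta_z=0.0, t=(0, 0, 0)):
    """Rigid-body transform: rotation + translation (6 DOF in 3D;
    only one rotation angle is shown here for brevity)."""
    c, s = np.cos(theta_z), np.sin(theta_z)
    M = np.eye(4)
    M[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    M[:3, 3] = t
    return M

def affine(scale=(1, 1, 1), **kw):
    """Full affine adds scaling (shearing omitted here): 12 DOF in 3D."""
    M = rigid_body(**kw)
    M[:3, :3] = M[:3, :3] @ np.diag(scale)
    return M

R = rigid_body(theta_z=np.pi / 2, t=(10, 0, 0))
A = affine(scale=(2, 2, 2), theta_z=np.pi / 2, t=(10, 0, 0))

# a direction vector (w=0, so translation does not apply)
v = np.array([1.0, 0.0, 0.0, 0.0])
print(np.linalg.norm((R @ v)[:3]))  # rigid body preserves length: 1.0
print(np.linalg.norm((A @ v)[:3]))  # affine with scale=2 doubles it: 2.0
```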
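The cost-function issue in point #2 can also be sketched in a few lines of numpy. Note this is a global correlation for illustration only; AFNI's actual “lpa”/“lpc” costs are computed over local patches, and I am making up the 1D data here:

```python
import numpy as np

rng = np.random.default_rng(0)
base = rng.random(1000)                       # "base" image intensities

same_contrast = base + 0.01 * rng.standard_normal(1000)
inverted = 1.0 - base                         # same structure, opposite contrast

def ls_cost(a, b):
    """Least squares ('ls'): small only when intensities match directly."""
    return np.mean((a - b) ** 2)

def pearson_cost(a, b):
    """Correlation-magnitude cost, in the spirit of lpa/lpc
    (which AFNI computes over local patches, not globally)."""
    return 1.0 - abs(np.corrcoef(a, b)[0, 1])

print(ls_cost(base, same_contrast))  # small: same contrast
print(ls_cost(base, inverted))       # large, despite identical structure
print(pearson_cost(base, inverted))  # near 0: structure is recognized
```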
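For point #3, a 1D toy example (again, just numpy for illustration, not what AFNI does internally) shows why ROI maps need nearest-neighbor interpolation: linear interpolation mixes neighboring values, producing fractional “labels” that belong to no ROI:

```python
import numpy as np

def shift_linear(x, s):
    """Shift by s voxels with linear interpolation (mixes neighbors)."""
    idx = np.arange(len(x)) - s
    lo = np.clip(np.floor(idx).astype(int), 0, len(x) - 1)
    hi = np.clip(lo + 1, 0, len(x) - 1)
    w = idx - np.floor(idx)
    return (1 - w) * x[lo] + w * x[hi]

def shift_nearest(x, s):
    """Shift with nearest-neighbor: values are copied, never mixed."""
    idx = np.clip(np.round(np.arange(len(x)) - s).astype(int), 0, len(x) - 1)
    return x[idx]

roi = np.array([0, 0, 1, 1, 2, 2])           # integer ROI labels
print(shift_linear(roi.astype(float), 0.5))  # fractional labels appear
print(shift_nearest(roi, 0.5))               # still valid integer labels
```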
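And point #4, the payoff of concatenating warps, in the same toy 1D setting: two separate regriddings whose net motion is zero still smear a sharp edge, while the concatenated (single) regrid does not. Again a sketch, not AFNI internals:

```python
import numpy as np

def shift_linear(x, s):
    """Regrid a 1-D signal by s voxels with linear interpolation."""
    idx = np.arange(len(x)) - s
    lo = np.clip(np.floor(idx).astype(int), 0, len(x) - 1)
    hi = np.clip(lo + 1, 0, len(x) - 1)
    w = idx - np.floor(idx)
    return (1 - w) * x[lo] + w * x[hi]

edge = np.array([0., 0., 0., 1., 1., 1.])    # a sharp tissue boundary

# two separate regriddings (+0.5 then -0.5 voxels): net motion is zero,
# but each interpolation smooths the edge a little more
twice = shift_linear(shift_linear(edge, 0.5), -0.5)

# concatenated transform (0.5 - 0.5 = 0): regrid once
once = shift_linear(edge, 0.0)

print(twice)  # edge is smeared
print(once)   # edge preserved exactly
```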
OK, to your question here:
- minimizing blurring from alignment matters—if you can concatenate all transforms before applying them, that would be best.
- making sure you have maximal image contrast would be best: even if your dsets have different tissue contrasts, AFNI has cost functions to handle that well. But if you have low spatial contrast (where it is hard to see sulci, gyri, tissue boundaries, etc.), then any alignment will be tricky, esp. if you are aligning between different brains (like a subject to template).
My guess is that BOLD percent signal change will have low spatial contrast, so aligning that kind of subject data to a template dataset will not give great results. It would be better to align a dataset that has clear, visible structure to it. As a side benefit of doing alignment earlier in your processing, you might be able to concatenate your transforms before applying them, which would be beneficial to outcomes.