FMRIPREP: spatial normalization

Hi,

I know I’ve already asked @effigies about this, but after speaking with my lab, I want to approach my question differently. Within fmriprep, I’ve set my output space to MNI152NLin2009cAsym, which means the preproc BOLD NIfTIs should be spatially aligned to the MNI152NLin2009cAsym template, except that the voxel dimensions are kept the same as in the original BOLD NIfTI. So if I were to move on to a group analysis, would I only need to resample the images to a 2x2x2 grid? Or should I perform another ANTs transform/registration? If only resampling is required, my PI is worried the registrations are not accurate enough; see the linked overlay images:


Additionally, if I wanted to do more processing with the BOLD NIfTIs in native space (e.g. computing an r-score NIfTI map from a seed region), would I use the antsApplyTransforms command with the inverse of ants_t1_to_mniComposite.h5 and the inverse of affine.txt (bold_to_t1) to get the BOLD NIfTIs back into their native space?
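Concretely, I was picturing something along the lines of the sketch below, using nipype’s wrapper around antsApplyTransforms. The file names are just placeholders based on what I see in my working directory (not official fmriprep outputs), and the transform order/inversion is exactly the part I’m unsure about; in particular, I’m guessing the InverseComposite.h5 that antsRegistration writes alongside the Composite.h5 is the right file for the nonlinear part, since I don’t think the warp inside the composite can be inverted on the fly:

```python
# Rough sketch only -- file names are placeholders from my working directory,
# not guaranteed fmriprep output names.
from nipype.interfaces.ants import ApplyTransforms

at = ApplyTransforms()
at.inputs.dimension = 3
at.inputs.input_image_type = 3  # 3 = time series (4D BOLD)
at.inputs.input_image = 'sub-01_task-rest_bold_space-MNI152NLin2009cAsym_preproc.nii.gz'
at.inputs.reference_image = 'sub-01_task-rest_boldref.nii.gz'  # native BOLD grid
at.inputs.output_image = 'sub-01_task-rest_bold_space-native_preproc.nii.gz'
# The last transform in the list is applied to the data first:
# MNI -> T1w (inverse composite), then T1w -> native BOLD (inverted bold_to_t1 affine).
at.inputs.transforms = ['affine.txt',                         # bold_to_t1, inverted below
                        'ants_t1_to_mniInverseComposite.h5']  # MNI -> T1w
at.inputs.invert_transform_flags = [True, False]
at.inputs.interpolation = 'LanczosWindowedSinc'
at.run()
```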
Related to the above: should there be an option for fmriprep to output the BOLD NIfTIs in native space as well (in addition to the T1w, fsnative, fsaverage*, and template options)? Then we could use our own template generated with ANTs (older adult brains have a lot of anatomical variability) and complete our own registrations when we are ready (e.g. after some subject/session-level analyses).
Arguments for/against?

One potential argument against: this splits the head-motion-correction transform from the reference-image-to-MNI transform, which goes against the “do everything in one transform” philosophy.

Final thought: how hard would it be to allow users to supply their own template image to fmriprep? E.g. specify a path to the template image, or have a directory within BIDS to place templates, like /derivatives/templates/<BIDS_format>_res_XxYxZ_template.nii.gz

The coregistration does seem a little bit off. Could you share the HTML reports for those two runs?

As for your questions:

  1. To do a group analysis in MNI space you do not need to resample the data or apply any additional transformations. Data from all your subjects are in the same space, so you can run analyses and fit models directly on them.

  2. I don’t see any benefit of doing the analysis in native BOLD space. Since the transformation from BOLD to T1w is affine, you can apply the same seed-based analysis in T1w space and the results will be the same (see the sketch after this list). Furthermore, since the affine matrices from motion correction and BOLD->T1w are combined, data in native BOLD space and in T1w space would both involve only one interpolation. Remember that we apply the same trick of keeping the voxel sizes of the outputs in line with the raw data.
    Happy to be convinced otherwise.

  3. I don’t see how keeping the BOLD NIfTIs in native space would help you use a study-specific custom ANTs template. Presumably the template would be calculated from the T1w images, so you would want your data in T1w space anyway? Or did you want to build a BOLD template?

  4. Adding an option for users to submit their own template should not be hard; it has already been discussed. See https://github.com/poldracklab/fmriprep/issues/487
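To illustrate point 2, here is a minimal sketch of a seed-based correlation (“r-score”) map computed directly on the T1w-space outputs with nilearn. The file names and the seed coordinate are hypothetical placeholders, not exact FMRIPREP output names:

```python
# Minimal sketch: seed-based correlation map in T1w space with nilearn.
# File names and the seed coordinate are hypothetical placeholders.
import numpy as np
from nilearn.input_data import NiftiSpheresMasker, NiftiMasker

bold = 'sub-01_task-rest_bold_space-T1w_preproc.nii.gz'
mask = 'sub-01_task-rest_bold_space-T1w_brainmask.nii.gz'

# Mean time series within an 8 mm sphere around a seed (coordinates given in
# the subject's T1w space).
seed_masker = NiftiSpheresMasker(seeds=[(0, -52, 26)], radius=8, standardize=True)
seed_ts = seed_masker.fit_transform(bold)    # shape: (n_volumes, 1)

brain_masker = NiftiMasker(mask_img=mask, standardize=True)
brain_ts = brain_masker.fit_transform(bold)  # shape: (n_volumes, n_voxels)

# With standardized time series, the dot product divided by the number of
# volumes is the Pearson correlation of each voxel with the seed.
r = np.dot(brain_ts.T, seed_ts) / brain_ts.shape[0]  # shape: (n_voxels, 1)
brain_masker.inverse_transform(r.T).to_filename('sub-01_seedcorr_space-T1w.nii.gz')
```

The same code would run on the native BOLD data (only the seed coordinates would change), which is why I don’t see an advantage to the extra output space.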


I also noticed some field-distortion-related problems in your data. If you don’t have fieldmaps, you should try running FMRIPREP with --use-syn-sdc, which will use constrained nonlinear registration to try to correct for the distortions.

Attached html reports:


We do have fieldmaps, but now I am curious whether we are applying them in the wrong direction and that is why the registrations are not ideal. Resolution pending: https://github.com/poldracklab/fmriprep/pull/694

  1. Thanks for the clarification!

  2. I think I was conflating transformations and resampling; if it’s only a transformation, then that should work well. Thanks again.

  3. The template would be derived from the T1w images, so we would want our BOLD data to pass through T1w space anyway. In agreement.

Thank you for the clarifications! I’ll check back tomorrow to make sure this resolves our questions.

Thanks for sharing the reports. There is something weird going on.

Here’s my summary

  1. Field unwarping is applied in the correct direction.
  2. The coregistration reports accurately reflect the misalignment in the output NIfTI files (so I don’t think there is a problem with how we apply the transformations).
  3. There was a skull-stripping issue with the T1 in controlGE140, as well as a failure of the coregistration to BOLD (could you open an issue on GitHub and share the raw data?).
  4. controlGE159 has really large ventricles, which might cause ANTs to produce unusual warp fields. Nonetheless, the coregistration with BOLD in this subject seems OK-ish (definitely better than the other one).
  5. BOLD tissue contrast in both participants is really low.

I think @ChrisGorgolewski assumed here that all your subjects are acquired with the same spatial resolution, so that the output grid in MNI space happens to be the same for all subjects. If your subjects vary in image resolution, then you’ll need to select a common grid for all of them (through resampling, as you mentioned).
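For example, something along these lines (a sketch assuming nilearn; file names are placeholders) would put every subject’s MNI-space series on one 2x2x2 mm grid before modelling:

```python
# Sketch: resample all MNI-space BOLD series onto a shared 2x2x2 mm grid.
# File names are placeholders; 'continuous' means (tri)linear interpolation.
import numpy as np
from nilearn.image import index_img, resample_img

subjects = ['sub-01', 'sub-02']  # etc.
files = ['%s_task-rest_bold_space-MNI152NLin2009cAsym_preproc.nii.gz' % s
         for s in subjects]

# Define the common target grid once, from the first subject's first volume.
ref3d = resample_img(index_img(files[0], 0),
                     target_affine=np.diag((2., 2., 2.)),
                     interpolation='continuous')

for sub, fname in zip(subjects, files):
    out = resample_img(fname, target_affine=ref3d.affine,
                       target_shape=ref3d.shape,
                       interpolation='continuous')
    out.to_filename('%s_task-rest_bold_space-MNI152NLin2009cAsym_res-2mm_preproc.nii.gz' % sub)
```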

We could add a flag so that preprocessed data in native space are also written out. The default would be not to generate these files, but it could be useful for things like you propose, or for testing purposes. I guess the --output-space flag would be the proper place for this, allowing several output spaces to be set.

The head motion estimates, the BOLD-to-T1 affine and the T1-to-MNI deformation field are different mappings computed separately in FMRIPREP, but they are composed and applied in a single resampling step. So the “do everything in one transform” philosophy still holds: for native-space outputs you would just be applying the head motion correction (or the head motion correction plus the BOLD-to-T1 affine).
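A minimal per-volume sketch of that idea, using nipype’s antsApplyTransforms wrapper (the transform file names are placeholders, not the exact files FMRIPREP uses internally):

```python
# Conceptual per-volume sketch of "one interpolation": compose the head-motion
# matrix for this volume with the BOLD->T1w affine in a single call.
# File names are placeholders.
from nipype.interfaces.ants import ApplyTransforms

at = ApplyTransforms()
at.inputs.dimension = 3
at.inputs.input_image = 'bold_vol-0042.nii.gz'          # one raw BOLD volume
at.inputs.reference_image = 'boldref_space-T1w.nii.gz'  # grid in T1w space, BOLD voxel size
at.inputs.output_image = 'bold_vol-0042_space-T1w.nii.gz'
# The last transform in the list is applied to the data first: head-motion
# correction for this volume, then the BOLD->T1w affine. ANTs composes them,
# so the volume is resampled (interpolated) only once.
at.inputs.transforms = ['bold_to_t1_affine.txt', 'hmc_vol-0042.txt']
at.inputs.interpolation = 'LanczosWindowedSinc'
at.run()
```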

As Chris mentioned, that has already been discussed. I don’t see any problem with implementing it; it would be nice to gauge the general interest in this feature.


I am still not convinced about the advantage of doing the analysis in subject BOLD space instead of subject T1w space. In our implementation both would have the same voxel size and require exactly one interpolation via an affine matrix (in the case of T1w, that is the combination of the motion-correction matrix and the BOLD->T1w matrix).

Thanks for the summary. I opened an issue here with a link to the BIDS subject.

3 posts were split to a new topic: fMRIPrep: preprocessed data in native BOLD space

2 posts were split to a new topic: fMRIPrep: spatial normalization failed