Aligning fMRIPrep and QSIPrep Outputs

Hello,

I will be doing an analysis that requires functional and tractography data to be in native space or some rigid transform thereof (so any common space that does not require affine spatial normalization, such as MNI). QSIPrep ACPC-aligns its data, so its outputs are rotated relative to fMRIPrep’s. Since tractography derivatives are the largest, I probably want to align the functional data to the DWI to limit computational needs, and I imagine using the T1s is the best way to go about this. However, since neither of the output T1s in anat is skull-stripped, a simple flirt has not sufficed. Does anyone have suggestions for getting the most reliable transform between the subject-space outputs of the two tools? Should I just skull-strip the two images and then try to align them? Can I make use of knowing how both images are transformed to MNI? Thanks in advance!

Best,
Steven

Hi Steven,
I think your best option would be to skip the T1ws altogether and directly align the boldref from each bold scan to the dwiref from the preprocessed DWIs. They should have somewhat similar voxel sizes, contrast and skull content.

A couple of things to be aware of: the fMRIPrep images are in RAS+ orientation and the QSIPrep images are in LPS+ orientation. Not all image registration tools handle this well, so be sure to run some tests before applying it to all your data. If you go with antsRegistration you can use the “Rigid” transformation. Also, if you’re doing connectivity matrices, you don’t need to align bold/dwi at all.
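
If it helps, here is a rough sketch of checking the stored orientations and doing the rigid boldref-to-dwiref registration (paths are placeholders; nibabel and antspy are assumed to be installed):

import ants
import nibabel as nib

# Placeholder paths -- swap in your own subject's reference images
boldref_path = '/PATH/TO/FMRIPREP/BOLDREF.nii.gz'
dwiref_path = '/PATH/TO/QSIPREP/DWIREF.nii.gz'

# Check the on-disk orientation codes (fMRIPrep is typically RAS+, QSIPrep LPS+)
print(nib.aff2axcodes(nib.load(boldref_path).affine))
print(nib.aff2axcodes(nib.load(dwiref_path).affine))

# ANTs registers in physical (world) coordinates, so the differing storage
# orientations should be handled, but do check the warped output visually
fixed = ants.image_read(dwiref_path)
moving = ants.image_read(boldref_path)
reg = ants.registration(fixed=fixed, moving=moving, type_of_transform='Rigid')
ants.image_write(reg['warpedmovout'], '/PATH/TO/OUTDIR/boldref_in_dwi_space.nii.gz')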

Let me know what ends up working best, this would be great to include in the documentation!

Thanks, I’ll try this out! Another thing I have considered is aligning the MNI outputs of fMRIPrep by applying the inverse T1-to-MNI transform from QSIPrep, but finding the relevant information in the h5 file has been difficult.

Running flirt twice (that is, running flirt on the output of an initial flirt run) seemed to produce a good alignment between the BOLD and DWI refs, so multiplying the two transformation matrices and then applying the result to the BOLD should do the trick. I wonder if ANTs would work better in a single step.
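
For the matrix multiplication step, something like this should work (a rough sketch with made-up filenames; FSL’s convert_xfm -concat does the same thing):

import numpy as np

# The two flirt matrices from the two-step alignment (filenames are placeholders)
bold2mid = np.loadtxt('bold_to_intermediate.mat')   # first flirt run
mid2dwi = np.loadtxt('intermediate_to_dwi.mat')     # second flirt run

# FSL convention: the transform applied second goes on the left
bold2dwi = mid2dwi @ bold2mid
np.savetxt('bold_to_dwi.mat', bold2dwi, fmt='%0.8f')

# then apply in a single resampling step:
# flirt -in bold.nii.gz -ref dwiref.nii.gz -applyxfm -init bold_to_dwi.mat -out bold_in_dwi.nii.gz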

You can use the MNI-to-T1w h5 file directly as the argument for the --transform flag in antsApplyTransforms. It’s one of the ways ANTs can store nonlinear and linear transforms in the same file.

I prefer ANTs for spatial operations because it has really nice interpolation options and handles image I/O consistently. That being said, if you’ve got flirt working already then go for it!

ANTs worked much better in one step! Example command:
antsApplyTransforms \
  -i /path/to/fmriprep/MNI_derivative.nii.gz \
  -t /BIDS/derivative/qsiprep/sub-XX/anat/sub-XX_from-MNI152NLin2009cAsym_to-T1w_mode-image_xfm.h5 \
  -r /BIDS/derivative/qsiprep/sub-XX/dwi/sub-XX_run-XX_space-T1w_dwiref.nii.gz \
  -o dwi_aligned_fmri_file.nii.gz

When aligning anatomical files I used the QSIPrep T1w as the reference image, and when aligning functional files I used the dwiref.

Do you have T1w or native space fmriprep outputs?

I used the MNI fMRIPrep outputs, and then applied the MNI-to-T1w transform from QSIPrep to align functional to diffusion. I have T1w-space fMRIPrep outputs too, but figured that since fMRIPrep and QSIPrep use the same MNI space for outputs, the QSIPrep transform should work on the fMRIPrep MNI derivatives. The downside is that the fMRIPrep MNI outputs have already been nonlinearly warped, so parts of their geometry may be distorted.

That’s what I was thinking too. If you’ve got T1w space outputs you can do a Rigid antsRegistration between the t1w boldref and the t1w dwiref and then antsApplyTransforms using that transform to get the T1w space bold data aligned with the dwi, skipping the two nonlinear transforms/interpolations going back and forth to MNI. The --interpolation LanczosWindowedSinc option is great too.

Yah that’s a good idea, I’ll give that a shot.

@mattcieslak
A rigid antsRegistration using T1-space derivatives worked much better! I used the Python distribution of ANTs since the command-line syntax was a bit overwhelming. A video of the alignment is below. If you think something like this would be useful for others, I would be happy to share a minimal code example or help with documentation.

Thanks,
Steven
[video attachment: align_vid]

This looks awesome! One of the next reconstruction workflows for QSIPrep is going to be ingressing FMRIPrep output. Would you be interested in working with us on that PR? Or if you wouldn’t mind adding a section to the documentation I know at least a few others are working on a similar process right now.

Sure, I’d be happy to help in any way I can! We can speak offline about that.

I was reading the QSIPrep paper and came across this line:

The final resampling uses a Lanczos-windowed Sinc interpolation if the requested output resolution is close to the resolution of the input data. If more than a 10% increase in spatial resolution is requested, then a BSpline interpolation is performed to prevent ringing artifact.

Should this same consideration (the 10% rule) also be applied to the functional data? That is, if the transform to align fMRI-to-DWI would result in the fMRI being upsampled by more than 10%, should I use BSpline interpolation?
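
One rough way I could check whether the fMRI would actually be upsampled by more than 10% is to compare the voxel sizes of the BOLD source and the DWI target grid (sketch with placeholder paths; the 10% threshold is taken from the quoted passage):

import ants
import numpy as np

bold = ants.image_read('/PATH/TO/BOLDREF.nii.gz')   # placeholder paths
dwi = ants.image_read('/PATH/TO/DWIREF.nii.gz')

# Ratio > 1 means the target (DWI) grid is finer than the BOLD grid,
# i.e. the BOLD data would be upsampled
ratio = np.array(bold.spacing[:3]) / np.array(dwi.spacing[:3])
interp = 'bSpline' if np.any(ratio > 1.1) else 'lanczosWindowedSinc'
print(ratio, interp)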

Thanks,
Steven

I don’t have an evidence-based suggestion on this one. You could try one of each and look for ringing in the output of the sinc-based interpolation. BSpline is still pretty high quality. What did you end up using?

In the video I posted I used LanczosWindowedSinc; I can try BSpline and compare.

Hello Steven and team,

I am doing an analysis with the same needs as @Steven. We also have both functional and tractography data and would like to have both in MNI152NLin6Asym space. I have already processed the fMRI data using fMRIPrep and am exploring pipelines to process our diffusion data. I am not sure whether fMRIPrep outputs can be ingressed in the current version of the QSIPrep reconstruction workflow, as discussed above, but if not, could you please share a code example to achieve this transformation?

Thank you,
Sneha

Hi,

I was wondering if you could share the commands you used for this alignment? I have to align the outputs of fMRIPrep and QSIPrep and I’m looking for the optimal way to do it.

Best regards,

Manuel

Hi @mblesac and @snp2003,

Sorry for the delayed response, I must not have seen the first message!

Here is a minimal working code snippet (with path names replaced by placeholders in ALL CAPS):

import ants

# Fixed image: the DWI reference from QSIPrep (the target we align to)
fi_path = '/PATH/TO/DWIREF/FROM/QSIPREP.nii.gz'
fi = ants.image_read(fi_path)

# Moving image: the BOLD reference from fMRIPrep
mo_path = '/PATH/TO/BOLDREF/FROM/FMRIPREP.nii.gz'
mo = ants.image_read(mo_path)

# Rigid boldref-to-dwiref registration; the transform files are written under outprefix
xfm = ants.registration(fixed=fi, moving=mo, type_of_transform='Rigid', outprefix='/PATH/TO/OUTDIR/')

# Apply the transform to the full 4D BOLD run (imagetype=3 marks a time series)
mo_path2 = '/PATH/TO/FULL/BOLD/RUN/TOBE/MOVED.nii.gz'
mo2 = ants.image_read(mo_path2)

moved = ants.apply_transforms(fi, mo2, transformlist=xfm['fwdtransforms'], interpolator='lanczosWindowedSinc', imagetype=3)

ants.image_write(moved, '/PATH/TO/OUTDIR/full_bold_moved.nii.gz')

fi is short for the fixed image, which in this case is the DWI reference, since we are aligning fMRI to DWI. We first calculate the boldref-to-dwiref transformation, then, in the second half of the code, apply it to a full BOLD run and save it out. The snippet relies on antspy. Hope this helps!
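
And if you need to bring other fMRIPrep derivatives into the same space (e.g. a brain mask), you can reuse the same transform; a quick sketch continuing from the snippet above (the mask filename is a placeholder, and nearestNeighbor keeps the mask binary):

import ants

# Reuses fi and xfm from the snippet above; imagetype=0 since the mask is 3D
mask = ants.image_read('/PATH/TO/FMRIPREP/BRAIN_MASK.nii.gz')
mask_moved = ants.apply_transforms(fi, mask, transformlist=xfm['fwdtransforms'], interpolator='nearestNeighbor', imagetype=0)
ants.image_write(mask_moved, '/PATH/TO/OUTDIR/brain_mask_in_dwi_space.nii.gz')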

Steven

Thank you so much @Steven. It certainly will.