ANTs worked much better in one step! Example command:
antsApplyTransforms -i /path/to/fmriprep/MNI_derivative.nii.gz -t /BIDS/derivative/qsiprep/sub-XX/anat/sub-XX_from-MNI152NLin2009cAsym_to-T1w_mode-image_xfm.h5 -r /BIDS/derivative/qsiprep/sub-XX/dwi/sub-XX_run-XX_space-T1w_dwiref.nii.gz -o dwi_aligned_fmri_file.nii.gz
When I aligned anatomical files, I used the QSIprep T1w, and when aligning functional files I used the dwiref.
I used the MNI fMRIPrep outputs, and then applied the MNI-to-T1w transform from QSIPrep to align functional to diffusion. I have T1w-space fMRIPrep outputs too, but figured that since fMRIPrep and QSIPrep use the same MNI space for outputs, the QSIPrep transform should work on the fMRIPrep MNI derivatives. The downside is that the fMRIPrep MNI outputs have already been transformed to MNI, so parts of their geometry may be distorted.
That’s what I was thinking too. If you’ve got T1w-space outputs, you can do a rigid antsRegistration between the T1w boldref and the T1w dwiref, then run antsApplyTransforms with that transform to get the T1w-space BOLD data aligned with the DWI, skipping the two nonlinear transforms/interpolations going back and forth to MNI. The --interpolation LanczosWindowedSinc option is great too.
@mattcieslak
A rigid antsRegistration using T1w-space derivatives worked much better! I used the Python distribution of ANTs, since the command-line syntax was a bit overwhelming. A video of the alignment is below. If you think something like this would be useful for others, I would be happy to share a minimal code example or help with documentation.
This looks awesome! One of the next reconstruction workflows for QSIPrep is going to be ingressing FMRIPrep output. Would you be interested in working with us on that PR? Or if you wouldn’t mind adding a section to the documentation I know at least a few others are working on a similar process right now.
I was reading the QSIPrep paper and came across this line:
The final resampling uses a Lanczos-windowed Sinc interpolation if the requested output resolution is close to the resolution of the input data. If more than a 10% increase in spatial resolution is requested, then a BSpline interpolation is performed to prevent ringing artifact.
Should this same consideration (the 10% rule) be applied to the functional data too? That is, if the transform to align fMRI to DWI would upsample the fMRI by more than 10%, should I use BSpline interpolation?
I don’t have an evidence-based suggestion on this one. You could try one of each and look for ringing in the output of the sinc-based interpolation. BSpline is still pretty high quality. What did you end up using?
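For reference, the paper's 10% rule is easy to encode when scripting this yourself. Below is a rough sketch of that heuristic (my own helper following the paper's description, not QSIPrep's internal code); the voxel sizes could come from, e.g., nibabel's img.header.get_zooms()[:3], and the returned strings match the antspy interpolator names used in the snippet later in this thread.

```python
import numpy as np

def choose_interpolator(in_zooms, out_zooms, tol=0.10):
    """Pick an antspy interpolator following the QSIPrep paper's 10% rule.

    in_zooms / out_zooms: voxel sizes in mm of the moving (fMRI) and
    reference (DWI) images. Upsampling by more than `tol` (10%) risks
    ringing with sinc interpolation, so fall back to BSpline then.
    """
    in_zooms = np.asarray(in_zooms, dtype=float)
    out_zooms = np.asarray(out_zooms, dtype=float)
    # fractional increase in resolution along the most-upsampled axis
    upsampling = (in_zooms / out_zooms - 1.0).max()
    if upsampling > tol:
        return 'bSpline'
    return 'lanczosWindowedSinc'
```

For example, resampling 2 mm fMRI onto a 1.5 mm DWI grid is a ~33% upsampling, so this would return 'bSpline'.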
I am doing an analysis with the same needs as @Steven. We also have both functional and tractography data, and we would like to have both in the MNI152NLin6Asym space. I have already processed the fMRI data using fMRIPrep and was exploring pipelines to process our DTI data. I am not sure whether fMRIPrep outputs are ingressed in the current version of the QSIPrep reconstruction workflow as discussed above, but if not yet, could you please share a code example to achieve this transformation?
I was wondering if you could share the commands you used for this alignment? I have to align the outputs of fMRIPrep and QSIPrep, and I’m looking for the optimal way to do it.
Sorry for the delayed response, I must have not seen the first message!
Here is a minimal working code snippet (with path names replaced by placeholders in ALL CAPS):
import ants
import numpy

# fixed image: the QSIPrep dwiref (we are aligning fMRI to DWI)
fi_path = '/PATH/TO/DWIREF/FROM/QSIPREP.nii.gz'
fi = ants.image_read(fi_path)
# moving image: the fMRIPrep boldref in T1w space
mo_path = '/PATH/TO/BOLDREF/FROM/FMRIPREP.nii.gz'
mo = ants.image_read(mo_path)
# rigid boldref-to-dwiref registration
xfm = ants.registration(fixed=fi, moving=mo, type_of_transform='Rigid', outprefix='/PATH/TO/OUTDIR/')
# apply the transform to the full 4D BOLD run (imagetype=3 = time series)
mo_path2 = '/PATH/TO/FULL/BOLD/RUN/TOBE/MOVED.nii.gz'
mo2 = ants.image_read(mo_path2)
moved = ants.apply_transforms(fixed=fi, moving=mo2, transformlist=xfm['fwdtransforms'], interpolator='lanczosWindowedSinc', imagetype=3)
ants.image_write(moved, '/PATH/TO/OUTDIR/full_bold_moved.nii.gz')
fi is short for the fixed image, which in this case is the DWI reference, since we are aligning fMRI to DWI. We first calculate the boldref-to-dwiref transformation, then, in the second half of the code, apply it to a full BOLD run and save it out. It relies on antspy and numpy. Hope this helps!
Hello @Steven,
My apologies for reviving this old discussion, but I have a related question. I’m trying to align the outputs of fMRIPrep and QSIPrep in MNI152NLin2009cAsym space, and I’m a little overwhelmed by the various spaces (native space, MNI space, etc.). What I have tried:
Run fMRIPrep with recon-all, with the default output space MNI152NLin2009cAsym.
Run QSIPrep with the previous recon-all output, for preprocessing and reconstruction in the default output space T1w.
In my understanding, what I need to do is:
Rerun QSIPrep with the output space MNI152NLin2009cAsym specified.
Run the above Python script to register the fMRI BOLD to the DWI.
I also wonder whether the recon-all output needs to be modified, given that the recon-spec is mrtrix_multishell_msmt_ACT-hsvs, because I see that log/citation.md of QSIRecon mentions the FreeSurfer outputs were registered to the QSIPrep outputs.
This is generally more difficult than doing everything in T1w space, and not recommended. What is your use case for doing it in MNI?
QSIPrep does not have MNI outputs for DWI. Warping DWI images to MNI is difficult because you have to deal with rotating the gradient table based on the warp. If you can wait until the end of the pipeline, you can just register streamlines to MNI (DIPY : Docs 1.7.0 - Applying image-based deformations to streamlines), but I still recommend doing everything in T1w space.
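To illustrate why the gradient table is the sticking point: each b-vector must be rotated by the rotational component of the spatial transform (for a nonlinear warp this becomes a per-voxel local rotation derived from the Jacobian, which is what makes it hard). A minimal numpy sketch for the affine case, not QSIPrep's actual implementation:

```python
import numpy as np

def rotate_bvecs(bvecs, affine):
    """Rotate b-vectors by the rotational part of a 3x3 affine.

    bvecs: (N, 3) array of gradient directions (b=0 rows are all zero).
    affine: (3, 3) linear part of the transform (no translation).
    The rotation is extracted via polar decomposition (SVD) so that
    scaling/shear in the affine does not distort the directions.
    """
    U, _, Vt = np.linalg.svd(affine)
    R = U @ Vt
    if np.linalg.det(R) < 0:  # avoid an improper rotation (reflection)
        U[:, -1] *= -1
        R = U @ Vt
    rotated = np.asarray(bvecs, dtype=float) @ R.T
    # re-normalize non-zero directions; b=0 rows stay zero
    norms = np.linalg.norm(rotated, axis=1, keepdims=True)
    safe = np.where(norms == 0, 1.0, norms)
    return np.where(norms > 0, rotated / safe, 0.0)
```

Note that this only handles a single global rotation; a nonlinear warp would need this applied voxelwise, which is why registering streamlines after reconstruction is the simpler route.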
What are you using the recon-all outputs for outside of fMRIPrep/QSIPrep pipelines that would require additional registration?
My goal is to construct the connectivity matrices of both the functional network and the structural network and use them for further machine-learning-based analysis. And it seems that many atlases are provided in MNI.
I found that the QSIPrep documentation mentions we can use --output-spaces template, but maybe I misunderstood it.
I’m sorry, I used the recon-all outputs just by following part of the instructions in doc, doc2. As the table in doc2 says, the recon-spec mrtrix_multishell_msmt_ACT-hsvs needs FreeSurfer input.
In short, I want to use fMRIPrep and QSIPrep to construct SC and FC networks. Thank you for your guidance and patience; I’m a beginner still learning neuroscience fundamentals, and I will provide additional details when needed.
QSIRecon registers atlases to your DWI image. So you can keep your fMRIPrep output in MNI space and use an MNI atlas (I recommend XCP_D as a postprocessing pipeline), and keep your DWI in subject space and let QSIRecon take care of registering atlases to your data.
That flag is deprecated.
You do not need to do anything to recon-all outputs, QSIRecon takes care of all registrations.
No problem! But since the original question has been answered, please open a new thread for additional questions.