I work with Zeynab and we were exploring a quick workaround to look at the data in native space by inverting the transformations.
However, I am having trouble getting antsApplyTransforms to take the affine matrix created by bbregister. I can get the *preproc.nii.gz bold image in MNI space registered to T1 space using the .h5 file. But antsApplyTransforms doesn’t like any of the affine .txt files. I’ve tried using the affine.txt file located in the /fsl2itk/ folder in workflows and the *T1_affine.txt file (inverting it) located in the /anat/ output folder.
Obviously it’s much simpler to just be able to have the native space be the output, but I’m curious as to why the transformations aren’t working.
```
terminate called after throwing an instance of 'itk::ImageFileWriterException'
Could not create IO object for writing file /autofs/space/rainier_002/users/harris/CTS/data/BIDS_output/tmp_workdir/fmriprep_wf/single_subject_HV03baseline_wf/func_preproc_task_rest_wf/bold_reg_wf/bbreg_wf/fsl2itk_inv/affine.txt
```
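For reference, the failing call looked roughly like this (the file names below are placeholders standing in for my actual data, and `[transform,1]` is ANTs' standard bracket syntax for inverting an affine):

```shell
# Rough sketch of the attempted command; all file names are
# placeholders, not the actual fMRIPrep outputs.
# The .h5 composite (MNI -> T1w) works; adding the inverted
# bbregister affine is where things break.
antsApplyTransforms \
  -i sub-01_task-rest_space-MNI_preproc.nii.gz \
  -r sub-01_T1w_preproc.nii.gz \
  -o bold_in_native.nii.gz \
  -t sub-01_T1w_space-MNI_target-T1w_warp.h5 \
  -t "[affine.txt,1]"
```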
Before I try to help you with antsApplyTransforms, let's make sure that what you are trying to do is the right thing to do.
For people looking for native/EPI/BOLD space in FMRIPREP outputs, we recommend using the T1w output space. It is aligned with the participant's T1w template (which allows for easy derivation of anatomical ROIs), uses the resolution of the raw input data (which saves space), and was created using only one interpolation by combining transforms (minimizing smoothing). There are very few legitimate cases where native/EPI/BOLD space would be needed instead of T1w space.
What you are attempting to do is to go from MNI to native/EPI/BOLD space. This has a few problems:
- Nonlinear transformations can be lossy: multiple voxels are squished into one, and reversing this cannot be done without losing some information.
- Applying additional interpolation to the data will introduce some smoothing.
Thanks for the quick response! Yes, I agree with all the points you raised. Selecting the T1w space as the output would definitely be the way to go if we are hoping to look at native space with fmriprep in its current form.
Mostly I was just trying to check my understanding of how the transformations work, and I was confused about why it wasn't working. But I'm also thinking about cases where one might want to look at how a group-level result in MNI space maps onto individual subject space. Based on your points, though, that can probably be done by transforming back to T1w space rather than all the way back to the original BOLD space.
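For the group-result use case, that would be a single antsApplyTransforms call with fMRIPrep's MNI-to-T1w composite transform. A hedged sketch (the transform file name follows fMRIPrep's older naming scheme, and the other paths are illustrative placeholders):

```shell
# Map a group-level result from MNI space into one subject's T1w space.
# All file names below are placeholders; the .h5 is the composite
# MNI -> T1w transform that fMRIPrep writes to its derivatives.
antsApplyTransforms \
  -i group_result_MNI.nii.gz \
  -r sub-01_T1w_preproc.nii.gz \
  -o group_result_in_T1w.nii.gz \
  -t sub-01_T1w_space-MNI152NLin2009cAsym_target-T1w_warp.h5 \
  -n NearestNeighbor
```

Note that `-n NearestNeighbor` is only appropriate for label or thresholded maps; for continuous statistic maps you would keep the default interpolation instead.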
Yes, that was it; I left it out somehow. It runs, but something isn't right: the resulting image appears almost completely shifted out of view. Only a small sliver of the brain is visible at the edge of the image.
But it seems to be applying the transforms correctly:
```
The composite transform comprises the following transforms (in order):
```
Perhaps the reference is at fault - you might need to dig up the BOLD reference image in the working directory. This is the one that is used for bold<->T1w registration - it might have different sform/qform which could lead to these issues.
Three years later, I arrived at the same topic (sorry for digging this out).
In case someone wants to do this and no longer has the temporary fMRIPrep work files, I may have a workaround.
The bold_ref and the original data might have different sform/qform headers (probably due to the fslsplit step). You can generate a NIfTI to serve as the reference for antsApplyTransforms using fslroi (faster than fslsplit in the case of a long BOLD run):
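For example (the input file name is a placeholder), extracting just the first volume gives a reference image that inherits the original run's header:

```shell
# fslroi <input> <output> <tmin> <tsize>:
# take 1 volume starting at t=0 to use as the ApplyTransforms reference
fslroi sub-01_task-rest_bold.nii.gz bold_ref_vol0.nii.gz 0 1
```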