antsApplyTransforms does not update the FoV on a 4D BOLD timeseries

Summary of what happened:

  • I have 2 sessions, each with a T1w, T2w, and BOLD acquisition:
drwxrwxr-x     - mpasternak 29 Jan 23:52   .
drwxrwxr-x     - mpasternak 29 Jan 23:43  ├──  V11
.rwxrwxrwx  1.8k mpasternak 12 Sep  2024  │   ├──  sub-GRN296_ses-V11_acq-Philips3T3DT1_T1w.json
.rwxrwxrwx  7.9M mpasternak 12 Sep  2024  │   ├──  sub-GRN296_ses-V11_acq-Philips3T3DT1_T1w.nii.gz
.rwxrwxrwx  1.7k mpasternak 12 Sep  2024  │   ├──  sub-GRN296_ses-V11_acq-Philips3T3DT2_T2w.json
.rwxrwxrwx  7.3M mpasternak 12 Sep  2024  │   ├──  sub-GRN296_ses-V11_acq-Philips3T3DT2_T2w.nii.gz
.rwxrwxrwx  2.7k mpasternak 18 Nov  2024  │   ├──  sub-GRN296_ses-V11_task-rest_acq-Philips3T_bold.json
.rwxrwxrwx   35M mpasternak 16 Sep  2024  │   └──  sub-GRN296_ses-V11_task-rest_acq-Philips3T_bold.nii.gz
drwxrwxr-x     - mpasternak 30 Jan 10:25  ├──  V12
.rwxrwxrwx  1.8k mpasternak 12 Sep  2024  │   ├──  sub-GRN296_ses-V12_acq-Philips3T3DT1_T1w.json
.rwxrwxrwx  8.7M mpasternak 12 Sep  2024  │   ├──  sub-GRN296_ses-V12_acq-Philips3T3DT1_T1w.nii.gz
.rwxrwxrwx  1.7k mpasternak 12 Sep  2024  │   ├──  sub-GRN296_ses-V12_acq-Philips3T3DT2_T2w.json
.rwxrwxrwx  8.4M mpasternak 12 Sep  2024  │   ├──  sub-GRN296_ses-V12_acq-Philips3T3DT2_T2w.nii.gz
.rwxrwxrwx  2.7k mpasternak 18 Nov  2024  │   ├──  sub-GRN296_ses-V12_task-rest_acq-Philips3T_bold.json
.rwxrwxrwx   43M mpasternak 16 Sep  2024  │   └──  sub-GRN296_ses-V12_task-rest_acq-Philips3T_bold.nii.gz
.rw-rw-r--   360 mpasternak 29 Jan 23:52  └──  V12_to_V11.txt

  • Within a session, these are well aligned from the start. Between sessions, however, they are very far apart.

  • What I’m after: I’d like the later session’s images to be brought roughly into alignment with the earlier session’s.

  • I have a text ITK transform file that rigid-body registers the later session’s T1w to the earlier session’s:

#Insight Transform File V1.0
#Transform 0
Transform: MatrixOffsetTransformBase_double_3_3
Parameters: 0.9975121222239797 0.06674008382751828 -0.022701489412543956 -0.06310718453215015 0.9889197095907575 0.1343701147858564 0.031417821070966113 -0.13260320383390728 0.9906711422159168 13.38436497025713 22.5374496279273 117.48154029742551
FixedParameters: 0 0 0
  • Outcomes:
    • The T1w and T2w get transformed as expected. Their FoV, when plotted with nilearn or opened in a viewer like MRIcroGL, accommodates the transformation.
    • This does not happen for the BOLD. It does appear to move in world space, but the FoV is not updated, so the transformed image cannot be viewed properly.

Command used (and if a helper script was used, a link to the helper script or the command generated):

Here is a link to an HTML export of a jupyter notebook that attempted to use antsApplyTransforms: https://jade-trula-95.tiiny.site

I have followed the guideline of using -e 3 for BOLD time series, as mentioned in this GitHub issue: antsApplyTransforms on 4D BOLD images · Issue #1717 · ANTsX/ANTs · GitHub

For quick reference, here is the BOLD-specific command:

antsApplyTransforms \
    -d 3 \
    -e 3 \
    -i /home/mpasternak/Documents/TEST/V12/sub-GRN296_ses-V12_task-rest_acq-Philips3T_bold.nii.gz \
    -r /home/mpasternak/Documents/TEST/V11/sub-GRN296_ses-V11_task-rest_acq-Philips3T_bold.nii.gz \
    -t /home/mpasternak/Documents/TEST/V12_to_V11.txt \
    -o /home/mpasternak/Documents/TEST/V12/sub-GRN296_ses-V12_task-rest_acq-Philips3T_bold_registered.nii.gz \
    -n LanczosWindowedSinc --float
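
To see the issue concretely, the grids can be compared with a few lines of nibabel (a sketch; the *_bold_registered.nii.gz path is the output of the command above):

import nibabel as nib

# antsApplyTransforms resamples onto the reference grid, so the output
# inherits the reference's shape and affine (i.e. its FoV)
paths = {
    "moving (V12 BOLD)": "/home/mpasternak/Documents/TEST/V12/sub-GRN296_ses-V12_task-rest_acq-Philips3T_bold.nii.gz",
    "reference (V11 BOLD)": "/home/mpasternak/Documents/TEST/V11/sub-GRN296_ses-V11_task-rest_acq-Philips3T_bold.nii.gz",
    "output (registered)": "/home/mpasternak/Documents/TEST/V12/sub-GRN296_ses-V12_task-rest_acq-Philips3T_bold_registered.nii.gz",
}
for label, path in paths.items():
    img = nib.load(path)
    print(label, img.shape)
    print(img.affine)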

Version:

v2.5.4

Screenshots / relevant information:

Example of initial difference in inter-session layout for T2w (overlay is V12 same-modality):
[screenshot]

After registration, everything works for T2w (and T1w):
[screenshot]

The same cannot be said for BOLD:
[screenshot]


Hi, is this reference image 3D or 4D? Since you are applying a 3D transform, you should use a 3D reference image. This can either be the T1w, or something like the mean of the BOLD time series in the space you want to resample to.
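
For example, something along these lines would produce a 3D temporal mean of the V11 BOLD to use as the reference (a nibabel sketch; the output filename is just an example):

import nibabel as nib
import numpy as np

# Build a 3D reference in the space you want to resample to:
# the temporal mean of the V11 BOLD series
bold_v11 = nib.load("/home/mpasternak/Documents/TEST/V11/sub-GRN296_ses-V11_task-rest_acq-Philips3T_bold.nii.gz")
mean_data = bold_v11.get_fdata().mean(axis=3).astype(np.float32)

# Reusing the 4D series' affine keeps the mean image in the same physical space
mean_img = nib.Nifti1Image(mean_data, bold_v11.affine)
nib.save(mean_img, "/home/mpasternak/Documents/TEST/V11/sub-GRN296_ses-V11_task-rest_acq-Philips3T_boldmean.nii.gz")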

At the time of this post, it was a 4D image, specifically the same image as the input, since the intent was to keep the output in the same space. Indeed, your suggestion of using the mean image works… mostly.

There is one issue that creeps up if the mean image (or any single 3D volume from the 4D time series) is used as a reference, outlined in this other post of mine. In short, the output may be clipped, since it keeps the FoV of the reference image. The only consistent workaround I’ve found is to adjust the translation values in the reference image’s affine matrix by adding/subtracting the net transform’s translation values as offsets. To be honest, this approach feels very black-boxy (e.g. why does the z-axis offset require a sign flip before being added… is it because ITK uses a different convention?).

Regarding coordinates, ITK uses an LPS coordinate system (like DICOM), which differs from the NIfTI convention. That may account for the differences in the affine parameters.
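
To make the bookkeeping concrete, here is a small numpy sketch (illustration only, not something antsApplyTransforms needs) that re-expresses the matrix and translation stored in V12_to_V11.txt, which ITK interprets in LPS, under the RAS convention by conjugating with diag(-1, -1, 1):

import numpy as np

# The first 9 values of the ITK Parameters line are the 3x3 matrix (row-major),
# the last 3 are the translation, all in LPS coordinates
M_lps = np.array([
    [ 0.9975121222239797,   0.06674008382751828, -0.022701489412543956],
    [-0.06310718453215015,  0.9889197095907575,   0.1343701147858564],
    [ 0.031417821070966113, -0.13260320383390728, 0.9906711422159168],
])
t_lps = np.array([13.38436497025713, 22.5374496279273, 117.48154029742551])

# Flipping the first two axes maps LPS coordinates to RAS and back
F = np.diag([-1.0, -1.0, 1.0])
M_ras = F @ M_lps @ F
t_ras = F @ t_lps  # the x and y components change sign under this mapping

print(M_ras)
print(t_ras)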

Warps are applied in physical space; the sample points are then converted back to voxel space using the reference and input image headers to create the deformed image. The reference image needs to be in the same physical space as the fixed image from the registration, and the input (moving) image needs to be in the same physical space as the moving image from the registration.

If you registered V12 T1w to V11 T1w, the reference image would need to be in the same physical space as V11 T1w, and the input image would need to be in the same physical space as V12 T1w. If that’s not the case, you can register the mean BOLD to T1w (say with a rigid transform) and include that in the call to antsApplyTransforms:

-t V12_to_V11.txt -t V12_bold_to_V12_t1w.txt
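
In ANTsPy terms, the composed call might look something like this (a sketch; V12_bold_to_V12_t1w.txt would be the output of that rigid mean-BOLD-to-T1w registration, any 3D image in the V11 physical space can serve as the reference, and the output filename is just an example):

import ants

# The V11 T1w is used as the 3D reference here for illustration
fixed = ants.image_read("/home/mpasternak/Documents/TEST/V11/sub-GRN296_ses-V11_acq-Philips3T3DT1_T1w.nii.gz")
moving = ants.image_read("/home/mpasternak/Documents/TEST/V12/sub-GRN296_ses-V12_task-rest_acq-Philips3T_bold.nii.gz")

# The last transform listed is applied first: conceptually the BOLD is first
# brought into the V12 T1w space, then from V12 into V11
warped = ants.apply_transforms(
    fixed=fixed,
    moving=moving,
    transformlist=[
        "/home/mpasternak/Documents/TEST/V12_to_V11.txt",
        "/home/mpasternak/Documents/TEST/V12_bold_to_V12_t1w.txt",
    ],
    interpolator="lanczosWindowedSinc",
    imagetype=3,  # time series, equivalent to -e 3
)
ants.image_write(warped, "/home/mpasternak/Documents/TEST/V12/sub-GRN296_ses-V12_task-rest_bold_in_V11.nii.gz")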

It is fine to change the resolution or FOV of the reference image as long as you don’t change the physical space. For example, you can use ResampleImageBySpacing on the V11 T1w to make a reference image that has the same spacing as the BOLD, or you can use ImageMath’s PadImage function to expand or crop the FOV. Both of these adjust the headers such that the physical space remains unchanged.
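
If you prefer to stay in Python, the ANTsPy counterparts of those two tools would be ants.resample_image and ants.pad_image. A rough sketch (the 3 mm spacing and 10-voxel pad are placeholder values; substitute your actual BOLD spacing and desired margin):

import ants

t1w_v11 = ants.image_read("/home/mpasternak/Documents/TEST/V11/sub-GRN296_ses-V11_acq-Philips3T3DT1_T1w.nii.gz")

# Resample the V11 T1w to a coarser (BOLD-like) spacing; the grid changes
# but the physical space does not (interp_type=0 is linear interpolation)
ref = ants.resample_image(t1w_v11, (3.0, 3.0, 3.0), use_voxels=False, interp_type=0)

# Pad 10 voxels on each side of each axis to enlarge the FoV; the aim, as above,
# is a header-consistent change that leaves the physical space of existing voxels alone
ref_padded = ants.pad_image(ref, pad_width=[(10, 10), (10, 10), (10, 10)])

ants.image_write(ref_padded, "/home/mpasternak/Documents/TEST/V11/V11_T1w_bold_spacing_padded_ref.nii.gz")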