Small differences in affines and bvecs between AP and PA scans crash qsiprep

Summary of what happened:

Hi all,

I have a question for the diffusion experts.

I acquire diffusion-weighted data on a Siemens scanner using the free diffusion mode and an optimized gradient table from Emmanuel Caruyer’s tool. I acquire the dataset twice (in the HCP-like format) with reversed phase-encoding directions, AP and PA. The two scans immediately follow each other, and I create a copy reference to ensure the images have the same orientation, selecting the “center of slice groups and saturation regions” option for the copy reference.

However, when converting the DICOMs to BIDS, the affines and bvecs of the two scans don’t match exactly. The differences are tiny (around the 6th decimal place), so I assume they are due to numerical inaccuracy. This is an example of the difference between the two affines:

array([[ 0.00000000e+00,  3.11993062e-08,  1.10827386e-07,
        -7.62939453e-06],
       [-2.11875886e-08,  0.00000000e+00,  0.00000000e+00,
         0.00000000e+00],
       [ 1.05239451e-07,  0.00000000e+00,  0.00000000e+00,
        -9.91821289e-05],
       [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
         0.00000000e+00]])
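
For reference, the difference above can be reproduced with nibabel along these lines (the file names are placeholders, not the actual BIDS file names):

import nibabel as nib

# Placeholder file names for the two reversed phase-encoding acquisitions.
ap = nib.load("sub-01_dir-AP_dwi.nii.gz")
pa = nib.load("sub-01_dir-PA_dwi.nii.gz")

# Element-wise difference between the two image affines.
print(ap.affine - pa.affine)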

This causes a crash in qsiprep while merging the two datasets (raw_rpe_concat), because the image FOVs are considered different.

As a quick-and-dirty workaround, I manually match one dataset’s sform and qform to the other’s with nibabel and overwrite its bvecs with the other dataset’s values in the BIDS directory.
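
A minimal sketch of that workaround with nibabel, assuming hypothetical BIDS file names (it only forces the stored headers and bvecs to match; it is an illustration of the hack, not a recommended fix):

import shutil
import numpy as np
import nibabel as nib

# Hypothetical file names; adapt to the actual BIDS layout.
ref_path = "sub-01_dir-AP_dwi.nii.gz"  # scan whose header is kept as reference
mov_path = "sub-01_dir-PA_dwi.nii.gz"  # scan whose header gets overwritten

ref = nib.load(ref_path)
mov = nib.load(mov_path)

# Rebuild the second image with the reference affine written into both the
# sform and the qform; the voxel data and remaining header fields are kept.
data = np.asanyarray(mov.dataobj)  # read the data before overwriting the file
fixed = nib.Nifti1Image(data, ref.affine, header=mov.header)
fixed.set_sform(ref.affine, code=int(ref.header["sform_code"]))
fixed.set_qform(ref.affine, code=int(ref.header["qform_code"]))
fixed.to_filename(mov_path)

# Overwrite the bvecs with the other scan's values (quick and dirty; whether
# touching the bvecs at all is a good idea is a separate question).
shutil.copy(ref_path.replace(".nii.gz", ".bvec"),
            mov_path.replace(".nii.gz", ".bvec"))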

Is there a better way to deal with this, either at the acquisition stage or during preprocessing?

I would appreciate any tips on this! Also, big thanks for your time and effort on this forum! It’s been the greatest learning resource so far :slight_smile:

Best,
Roman

Command used (and if a helper script was used, a link to the helper script or the command generated):

qsiprep /input /output participant --output-resolution 1 --hmc-model eddy --eddy-config /config/eddy_params.json --fs-license-file /config/license.txt --pepolar-method TOPUP --denoise-method dwidenoise --unringing-method mrdegibbs --work-dir /output/work --nthreads 30 --skip-bids-validation --distortion-group-merge average

Version:

0.21.4

Environment (Docker, Singularity / Apptainer, custom installation):

Docker

Relevant log outputs (up to 20 lines):

Traceback:
	Traceback (most recent call last):
	  File "/opt/conda/envs/qsiprep/lib/python3.10/site-packages/nipype/interfaces/base/core.py", line 397, in run
	    runtime = self._run_interface(runtime)
	  File "/opt/conda/envs/qsiprep/lib/python3.10/site-packages/qsiprep/interfaces/nilearn.py", line 145, in _run_interface
	    new_nii = concat_imgs(self.inputs.in_files, dtype=self.inputs.dtype)
	  File "/opt/conda/envs/qsiprep/lib/python3.10/site-packages/nilearn/_utils/niimg_conversions.py", line 525, in concat_niimgs
	    for index, (size, niimg) in enumerate(
	  File "/opt/conda/envs/qsiprep/lib/python3.10/site-packages/nilearn/_utils/niimg_conversions.py", line 173, in _iter_check_niimg
	    raise ValueError(
	ValueError: Field of view of image #1 is different from reference FOV.
	Reference affine:
	array([[-1.04992306e+00, -7.69646838e-03, -1.05863577e-02,
         1.08450203e+02],
       [-3.13855452e-03,  9.66595292e-01, -4.29630697e-01,
        -7.62742691e+01],
       [-1.23085175e-02,  4.10041809e-01,  1.01257372e+00,
        -9.30755386e+01],
       [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
         1.00000000e+00]])
	Image affine:
	array([[-1.04992306e+00, -7.69649958e-03, -1.05864685e-02,
         1.08450211e+02],
       [-3.13853333e-03,  9.66595292e-01, -4.29630697e-01,
        -7.62742691e+01],
       [-1.23086227e-02,  4.10041809e-01,  1.01257372e+00,
        -9.30754395e+01],
       [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
         1.00000000e+00]])

Hi @romanbelenya, and welcome to Neurostars!

Please see this thread, which already addresses this: Different FOV errors - #2 by mattcieslak

I am not sure you want to be changing your bvec files though.

Best.
Steven

Thanks for the tip, Steven!

Best,
Roman