Errors running QSIPrep on multi-shell diffusion sequence with two phase encoding directions

Summary of what happened:

We acquired a multi-shell diffusion sequence with two phase encoding directions (AP & PA). We tried running QSIPrep on a subject’s data, but encountered some errors that we are unsure how to resolve. We would greatly appreciate any guidance from the QSIPrep community on how to proceed.

One error is related to the FOVs of the images. We see an open issue about this in QSIPrep’s GitHub repo (see QSIPrep#273). To fix the error, we transferred the affines from the PA images to the AP images, as suggested here. Is this still the recommended approach, and are there any implications we should consider before proceeding? The comment also suggests using the --denoise-before-combining flag, but this doesn’t seem to be a valid QSIPrep flag.

Additionally, the documentation for HCP-style sequences recommends using both the --distortion-group-merge average and --combine-all-dwis flags. However, the latter does not appear to be a valid flag either.

Any clarification on the right approach to handle this FOV error and which flags to include would be much appreciated!

Command used (and if a helper script was used, a link to the helper script or the command generated):

qsiprep.sbatch

#!/bin/bash
#SBATCH -J qsiprep
#SBATCH --time=1-00:00:00
#SBATCH -n 1
#SBATCH --array 1 # TODO: update to include more subjects
#SBATCH --cpus-per-task=8
#SBATCH --mem-per-cpu=8G
#SBATCH -p russpold,hns,normal
# Outputs ----------------------------------
#SBATCH -o ./log/%x-%A-%a.out
#SBATCH -e ./log/%x-%A-%a.err 
#SBATCH --mail-user=logben@stanford.edu
#SBATCH --mail-type=END 
# ------------------------------------------

: ' 
Name: qsiprep.sbatch 
Description: Submits a single batch QSIPrep job for each subject, 
where the subject is determined by the line number in subs.txt
corresponding to the SLURM array ID. 
'

# Prepare paths
source ./setup_env.sh

# Prepare apptainer/singularity command line
APPTAINER_CMD="apptainer run --cleanenv -B $BIDS_DIR:/data -B $WORK_DIR:/work $APPTAINER_IMAGE"

# Compose subject ID for each array task
SUBJECT=$( sed "${SLURM_ARRAY_TASK_ID}q;d" ${SUBJECTS_FILE} )

# Compose the command line with flags for QSIPrep
# NOTE: Include: --distortion-group-merge average?
cmd="$APPTAINER_CMD /data $DERIVS_DIR participant --participant-label $SUBJECT -w /work --output-resolution 1.5 --skip-bids-validation --verbose"

# Show and execute the command
echo Running task ${SLURM_ARRAY_TASK_ID}
echo Commandline: $cmd
eval $cmd 
exitcode=$? # store exit code

echo Finished task ${SLURM_ARRAY_TASK_ID} with exit code $exitcode
exit $exitcode

setup_env.sh

#!/bin/bash

export VERSION="1.0.0rc1"
export IMAGE_NAME="qsiprep_${VERSION}"
export APPTAINER_IMAGE="${SCRATCH}/.apptainer/${IMAGE_NAME}.sif"

export PROJECT_DIR="${SCRATCH}/projects/rdoc_fmri"
export BIDS_DIR="${PROJECT_DIR}/data/bids"
export DERIVS_DIR="${BIDS_DIR}/derivatives/${IMAGE_NAME}"
export WORK_DIR="${PROJECT_DIR}/work/${IMAGE_NAME}"
export SUBJECTS_FILE="${PROJECT_DIR}/subs.txt"

mkdir -p "${DERIVS_DIR}"
mkdir -p "${WORK_DIR}"

Version:

QSIPrep version 1.0.0rc1 (docker pull pennlinc/qsiprep:1.0.0rc1)

Environment (Docker, Singularity / Apptainer, custom installation):

We are using apptainer and submitting the job on HPC.

Data formatted according to a validatable standard? Please provide the output of the validator:

Yes, the directory is in valid BIDS format. We received one warning about SliceTiming in one of our functional scans, which we chose to ignore:

bids-validator@1.14.6
	1: [WARN] You should define 'SliceTiming' for this file. If you don't provide this information slice time correction will not be possible. 'Slice Timing' is the time at which each slice was acquired within each volume (frame) of the acquisition. Slice timing is not slice order -- rather, it is a list of times containing the time (in seconds) of each slice acquisition in relation to the beginning of volume acquisition. (code: 13 - SLICE_TIMING_NOT_DEFINED)
		./sub-S1/ses-01/func/sub-S1_ses-01_task-rest_run-1_echo-1_bold.nii.gz
		./sub-S1/ses-01/func/sub-S1_ses-01_task-rest_run-1_echo-2_bold.nii.gz
		./sub-S1/ses-01/func/sub-S1_ses-01_task-rest_run-1_echo-3_bold.nii.gz

Relevant log outputs (up to 20 lines):

	ValueError: Field of view of image #1 is different from reference FOV.
	Reference affine:
	array([[  -1.5       ,    0.        ,   -0.        ,  105.15640259],
	       [  -0.        ,    1.5       ,   -0.        , -118.27559662],
	       [   0.        ,    0.        ,    1.5       ,  -21.9727993 ],
	       [   0.        ,    0.        ,    0.        ,    1.        ]])
	Image affine:
	array([[  -1.5       ,    0.        ,   -0.        ,  104.05000305],
	       [  -0.        ,    1.5       ,   -0.        , -105.34999847],
	       [   0.        ,    0.        ,    1.5       ,  -43.59999847],
	       [   0.        ,    0.        ,    0.        ,    1.        ]])
	Reference shape:
	(140, 140, 84)
	Image shape:
	(140, 140, 84, 107)
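Since the rotation/scale blocks of the two affines are identical, the mismatch is purely a translation of the grid origin; a quick NumPy check on the translation columns copied from the log:

```python
import numpy as np

# Translation columns from the reference and image affines above.
ref_t = np.array([105.15640259, -118.27559662, -21.9727993])
img_t = np.array([104.05000305, -105.34999847, -43.59999847])

offset = ref_t - img_t  # per-axis origin shift in mm
print("per-axis offset (mm):", offset)
print("total offset (mm):", np.linalg.norm(offset))  # ~25 mm
```

So the two grids are offset by roughly 2.5 cm, mostly along y and z.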

Screenshots / relevant information:

This is the directory structure of our BIDS dataset:

├── dataset_description.json
├── fieldmap.json
├── README.md
└── sub-S1
    ├── anat
    │   ├── sub-S1_run-1_T1w.json
    │   ├── sub-S1_run-1_T1w.nii.gz
    │   ├── sub-S1_run-1_T2w.json
    │   └── sub-S1_run-1_T2w.nii.gz
    ├── dwi
    │   ├── sub-S1_acq-G105_dir-AP_run-1_dwi.bval
    │   ├── sub-S1_acq-G105_dir-AP_run-1_dwi.bvec
    │   ├── sub-S1_acq-G105_dir-AP_run-1_dwi.json
    │   ├── sub-S1_acq-G105_dir-AP_run-1_dwi.nii.gz
    │   ├── sub-S1_acq-G105_dir-PA_run-1_dwi.bval
    │   ├── sub-S1_acq-G105_dir-PA_run-1_dwi.bvec
    │   ├── sub-S1_acq-G105_dir-PA_run-1_dwi.json
    │   └── sub-S1_acq-G105_dir-PA_run-1_dwi.nii.gz
    └── ses-01
        ├── fmap
        │   ├── sub-S1_ses-01_run-1_fieldmap.json
        │   ├── sub-S1_ses-01_run-1_fieldmap.nii.gz
        │   └── sub-S1_ses-01_run-1_magnitude.nii.gz
        └── func
            ├── sub-S1_ses-01_task-rest_run-1_echo-1_bold.json
            ├── sub-S1_ses-01_task-rest_run-1_echo-1_bold.nii.gz
            ├── sub-S1_ses-01_task-rest_run-1_echo-2_bold.json
            ├── sub-S1_ses-01_task-rest_run-1_echo-2_bold.nii.gz
            ├── sub-S1_ses-01_task-rest_run-1_echo-3_bold.json
            └── sub-S1_ses-01_task-rest_run-1_echo-3_bold.nii.gz

Hi @CogNeuro-LB,

Out of curiosity, why aren’t your dwi and anat files also in the session folder? Also, I presume the fieldmaps are linked (IntendedFor or B0FieldSource/Identifier) to the BOLD?

This is the default now, and not a flag. There is now a flag for separating DWI outputs instead.

What if you include that flag? Do you get the same error?

Can you also try 1.0.0?

In theory, yes, that is still the recommended approach. But I don’t have a good sense of how different the images really are based on those affines. Usually the affine differences I see in these cases are smaller (on the order of decimals). There is a chance that the gradient table may have to be adjusted.

Best,
Steven

Hi Steven! Thanks for your quick response.

why aren’t your dwi and anat files also in the session folder?

There isn’t a good reason for this. Normally, we would have the anat and dwi directories nested under a session directory. (In this particular case, S1 was a pilot subject, and the primary purpose of these data was just to evaluate our QSIPrep pipeline.)

I presume the fieldmaps are linked (IntendedFor or B0FieldSource/Identifier) to the BOLD

Yes, the fieldmaps are linked to the BOLD and dwi files. Does QSIPrep still need the relative paths in IntendedFor (instead of B0FieldSource/Identifier)? If so, would you suggest using IntendedFor for both func and dwi files, or IntendedFor for just the dwi files and B0FieldSource/Identifier for the func files, or something else?

This is the default now, and not a flag. There is now a flag for separating DWI outputs instead.

Thank you for the clarification!

Can you also try 1.0.0?

Yep! I’ll try 1.0.0.

There is chance that the gradient table may have to be adjusted.

Okay, thank you.

I’ll run QSIPrep v1.0.0 with --distortion-group-merge average on the FOV-corrected images and see how that fares.

Hi @CogNeuro-LB,

I am confused as to why you are linking your fmaps to the DWI as well, as I would have thought you would rather use the reverse phase encoded DWI pairs to correct each other. Is that your intention?

QSIPrep at this point only accepts relative path IntendedFor. fMRIPrep accepts both relative and BIDS URI intended for as well as the B0 metadata.
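For reference, a relative-path `IntendedFor` entry lives in the fieldmap sidecar and is written relative to the subject directory. The filenames below are taken from the dataset tree earlier in the thread; treat this as an illustrative sketch, not the actual sidecar:

```json
{
    "Units": "Hz",
    "IntendedFor": [
        "ses-01/func/sub-S1_ses-01_task-rest_run-1_echo-1_bold.nii.gz",
        "ses-01/func/sub-S1_ses-01_task-rest_run-1_echo-2_bold.nii.gz",
        "ses-01/func/sub-S1_ses-01_task-rest_run-1_echo-3_bold.nii.gz"
    ]
}
```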

Best,
Steven

I am confused as to why you are linking your fmaps to the DWI as well, as I would have thought you would rather use the reverse phase encoded DWI pairs to correct each other. Is that your intention?

Sorry for the confusion. You’re right that the fieldmaps should not be linked to the dwi files in this case. But we do have a subset of subjects in which only one of the two DWI pairs is present; in that case, I think we should be linking the dwi files to the fieldmap, but let me know if that is not the case.

QSIPrep at this point only accepts relative path IntendedFor

Thank you!

Hi @CogNeuro-LB,

I guess it depends on how many subjects are in your study. If it’s a good amount, you might just be better off using the fieldmap for everything, even if the SDC isn’t as good, just for consistency across the cohort.

Best,
Steven

Okay, that makes sense. Thank you for all the help! We’ll check back in as needed :slight_smile:

Best,
Logan


Hey @Steven,

We ran QSIPrep v1.0.0 with the --distortion-group-merge average flag. The script executed successfully, but the report shows severe artifacts and a suspiciously high mean framewise displacement (0.530). Might the large affine differences between the PA and AP images explain the quality of these results? If so, is there a workaround? Most of our scans have affine differences of similar magnitude.


Hi @CogNeuro-LB

What if you concat instead of average?

Best,
Steven

Hey @Steven,

If we run the script with --distortion-group-merge concat instead of --distortion-group-merge average, the script fails with the following error:

# QSIPrep command in outfile
Commandline: apptainer run --cleanenv -B /scratch/users/logben/projects/rdoc_fmri/data/bids:/data -B /scratch/users/logben/projects/rdoc_fmri/work/qsiprep_1.0.0:/work /scratch/users/logben/.apptainer/qsiprep_1.0.0.sif /data /scratch/users/logben/projects/rdoc_fmri/data/bids/derivatives/qsiprep_1.0.0 participant --participant-label S1 -w /work --output-resolution 1.5 --distortion-group-merge concat --skip-bids-validation --verbose
# Error that caused QSIPrep job to fail 
	 [Node] Error on "qsiprep_1_0_wf.sub_S1_ses_01_wf.sub_S1_ses_01_acq_G105_run_1_final_merge_wf.distortion_merger" (/work/qsiprep_1_0_wf/sub_S1_ses_01_wf/sub_S1_ses_01_acq_G105_run_1_final_merge_wf/distortion_merger)
250307-17:45:28,199 nipype.workflow ERROR:
250307-17:45:28,219 nipype.workflow ERROR:
    raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node distortion_merger.
	    raise DimensionError(len(niimg.shape), ensure_ndim)
	nilearn._utils.exceptions.DimensionError: Input data has incompatible dimensionality: Expected dimension is 4D and you provided a 3D image. See https://nilearn.github.io/stable/manipulating_images/input_output.html.
250307-17:45:30,197 nipype.workflow ERROR:
    raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node distortion_merger.
	    raise DimensionError(len(niimg.shape), ensure_ndim)
	nilearn._utils.exceptions.DimensionError: Input data has incompatible dimensionality: Expected dimension is 4D and you provided a 3D image. See https://nilearn.github.io/stable/manipulating_images/input_output.html.

Can you execute (on the login node is fine):

cat /scratch/users/logben/projects/rdoc_fmri/work/qsiprep_1.0.0/qsiprep_1_0_wf/sub_S1_ses_01_wf/sub_S1_ses_01_acq_G105_run_1_final_merge_wf/distortion_merger/_report/report.rst

There you’ll find the inputs to this particular node. It seems nilearn is complaining that one of your inputs is 3D instead of 4D:

nilearn._utils.exceptions.DimensionError: Input data has incompatible dimensionality: Expected dimension is 4D and you provided a 3D image. See https://nilearn.github.io/stable/manipulating_images/input_output.html.
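If nibabel isn’t available on the login node, you can read the header’s rank field with the standard library alone (a hedged sketch: NIfTI-1 stores `dim`, eight int16 values, at byte offset 40, with `dim[0]` being the number of dimensions):

```python
import gzip
import struct

def nifti_ndim(path):
    """Return dim[0] (number of dimensions) from a NIfTI-1 header."""
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rb") as f:
        header = f.read(348)
    # sizeof_hdr (int32 at offset 0) is always 348; use it to detect byte order.
    endian = "<" if struct.unpack_from("<i", header, 0)[0] == 348 else ">"
    return struct.unpack_from(endian + "8h", header, 40)[0]
```

A multi-volume DWI run should report 4; any file reporting 3 would explain the DimensionError above.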

In addition to that, can you report the following?

nib-ls ${SCRATCH}/projects/rdoc_fmri/data/bids/sub-S1/dwi/*_dwi.nii.gz