Issues with topup and eddy in qsiprep

Summary of what happened:

Hello qsiprep experts!

First of all, thanks for putting all this incredible pipeline together! I love it!!

I am trying to make it work for one of our datasets, and I am not sure I am setting this up correctly.
So, I slightly customized my eddy.param.json to run eddy_cuda.

{
  "flm": "quadratic",
  "slm": "linear",
  "fep": false,
  "interp": "spline",
  "nvoxhp": 1000,
  "fudge_factor": 10,
  "dont_sep_offs_move": false,
  "dont_peas": false,
  "method": "jac",
  "repol": true,
  "num_threads": 4,
  "is_shelled": true,
  "use_cuda": true,
  "cnr_maps": true,
  "residuals": true,
  "mporder": 27,
  "niter": 8,
  "estimate_move_by_susceptibility": true,
  "output_type": "NIFTI_GZ",
  "args": ""
}
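For reference, one quick sanity check is to compare the JSON keys against the command qsiprep prints, to see which options actually made it through. A minimal Python sketch, with the config and command abbreviated inline (note that some keys map to differently named flags, e.g. fudge_factor becomes --ff, so absence of a name is only a hint):

```python
# Abbreviated copy of the eddy config and of the command qsiprep printed.
cfg = {
    "flm": "quadratic", "slm": "linear", "niter": 8,
    "mporder": 27, "repol": True, "nvoxhp": 1000,
}
command = (
    "eddy_cuda --flm=quadratic --slm=linear --niter=5 "
    "--nvoxhp=1000 --repol"
)

# List config keys whose flag name never appears in the command.
# Caveat 1: some keys map to differently named flags (fudge_factor -> --ff,
# is_shelled -> --data_is_shelled, method -> --resamp).
# Caveat 2: a flag being present does not prove its value was applied
# (here --niter=5 appears even though the config asks for 8).
missing = [k for k in cfg if f"--{k}" not in command]
print(missing)  # -> ['mporder']
```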

and it looks like with this setting, qsiprep comes up with the following command:

eddy_cuda  \
--cnr_maps \
--field=/work/qsiprep_work/qsiprep_wf/single_subject_500_wf/dwi_preproc_ses_01_acq_PA_wf/hmc_sdc_wf/topup/fieldmap_HZ \
--field_mat=/work/qsiprep_work/qsiprep_wf/single_subject_500_wf/dwi_preproc_ses_01_acq_PA_wf/hmc_sdc_wf/topup_to_eddy_reg/topup_reg_image_flirt.mat --flm=quadratic \
--ff=10.0 \
--acqp=/work/qsiprep_work/qsiprep_wf/single_subject_500_wf/dwi_preproc_ses_01_acq_PA_wf/hmc_sdc_wf/gather_inputs/eddy_acqp.txt \
--bvals=/remote/iCARE/brainy/bids/sub-500/ses-01/dwi/sub-500_ses-01_acq-PA_dwi.bval \
--bvecs=/remote/iCARE/brainy/bids/sub-500/ses-01/dwi/sub-500_ses-01_acq-PA_dwi.bvec \
--imain=/work/qsiprep_work/qsiprep_wf/single_subject_500_wf/dwi_preproc_ses_01_acq_PA_wf/pre_hmc_wf/merge_and_denoise_wf/dwi_denoise_ses_01_acq_PA_dwi_wf/degibbser/sub-500_ses-01_acq-PA_dwi_denoised_mrdegibbs.nii.gz \
--index=/work/qsiprep_work/qsiprep_wf/single_subject_500_wf/dwi_preproc_ses_01_acq_PA_wf/hmc_sdc_wf/gather_inputs/eddy_index.txt \
--mask=/work/qsiprep_work/qsiprep_wf/single_subject_500_wf/dwi_preproc_ses_01_acq_PA_wf/hmc_sdc_wf/pre_eddy_b0_ref_wf/synthstrip_wf/mask_to_original_grid/topup_imain_corrected_avg_trans_mask_trans.nii.gz \
--interp=spline \
--data_is_shelled \
--resamp=jac \
--niter=5 \
--nvoxhp=1000 \
--out=/work/qsiprep_work/qsiprep_wf/single_subject_500_wf/dwi_preproc_ses_01_acq_PA_wf/hmc_sdc_wf/eddy/eddy_corrected \
--repol \
--residuals \
--slm=linear

where eddy_acqp.txt contains only the reversed b0 (but none of the b0s in the dwi sequence… not even the 1st one):
0 1 0 0.059449
and eddy_index.txt effectively uses just the 1st b0 to correct all volumes:

1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
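To spell out how eddy reads these two files: --acqp has one row per phase-encode/readout combination, and --index assigns each volume in --imain (by position) to one of those rows. A hypothetical two-row sketch (directions and readout time made up for illustration):

```python
# Hypothetical acqp with an AP row and a PA row; the first two volumes
# were acquired AP, the remaining ten PA. Index values are 1-based and
# point at acqp rows, one value per dwi volume.
acqp_rows = [
    "0 -1 0 0.059449",  # row 1: AP (j-)
    "0 1 0 0.059449",   # row 2: PA (j+)
]
index = [1, 1] + [2] * 10  # 12 volumes total

acqp_txt = "\n".join(acqp_rows) + "\n"
index_txt = " ".join(str(i) for i in index) + "\n"
print(acqp_txt)
print(index_txt)  # -> 1 1 2 2 2 2 2 2 2 2 2 2
```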

Now, qsiprep recognizes that I have multiple b0s interleaved in my dwi sequence, in addition to the epi fieldmap (IntendedFor) b0:

│   ├── sub-500_ses-01_acq-AP_epi_b0-00.nii.gz
│   ├── sub-500_ses-01_acq-PA_dwi_b0-00.nii.gz
│   ├── sub-500_ses-01_acq-PA_dwi_b0-105.nii.gz
│   ├── sub-500_ses-01_acq-PA_dwi_b0-120.nii.gz
│   ├── sub-500_ses-01_acq-PA_dwi_b0-15.nii.gz
│   ├── sub-500_ses-01_acq-PA_dwi_b0-30.nii.gz
│   ├── sub-500_ses-01_acq-PA_dwi_b0-45.nii.gz
│   ├── sub-500_ses-01_acq-PA_dwi_b0-60.nii.gz
│   ├── sub-500_ses-01_acq-PA_dwi_b0-75.nii.gz
│   ├── sub-500_ses-01_acq-PA_dwi_b0-90.nii.gz

Yet, it decides to use just the 1st one, as per eddy_index.txt above.

Can anyone tell me what I am doing wrong in setting qsiprep up?
Also, is there a way for me to pass my “eddy_index.txt”, “eddy_acqp.txt” and “slspec” to qsiprep?

Thanks!!
Amelia

Command used:

singularity run \
--nv \
-B /mnt/resist/ \
/mnt/resist/singimages/qsiprep-stable.sif \
/mnt/resist/bids_data/ \
/mnt/resist/ \
participant \
--fs-license-file /mnt/resist/dpreproc/FS_license.txt \
--freesurfer-input /mnt/resist/preproc/sub-100/ses-01/anat/ \
--anat-modality T1w \
--anatomical-template MNI152NLin2009cAsym \
--output_resolution 1.7 \
--denoise_method dwidenoise \
 --unringing-method mrdegibbs \
--distortion-group-merge concat \
--hmc-model eddy \
--eddy-config /home/versacea22/qsiprep_eddy_params/eddy_params_resist.json \
--write-local-bvecs \
--nthreads 12 --omp_nthreads 12 \
-w /mnt/resist/dpreproc/ \
--write-graph \
--participant_label 100 \
--skip-bids-validation \
--stop-on-first-crash

Version:

version: 0.21.4

Env:

Singularity

Data formatted according to a validatable standard? Please provide the output of the validator:

PASTE VALIDATOR OUTPUT HERE

Relevant log outputs (up to 20 lines):

PASTE LOG OUTPUT HERE

Screenshots / relevant information:


Hi @A_V and welcome to neurostars!

In the future, please use the Software Support post category for questions like this, which prompts you for important information. You can see I edited your post to add it for you this time, but there is still missing information that you can edit your post to add.

What is this, your BIDS directory? If so, that doesn’t look BIDS valid.

Best,
Steven

Hi Steven,

Thanks for editing my post appropriately.

No, the b0 extraction is something that qsiprep did (in qsiprep_wf/single_subject_100_wf/dwi_preproc_ses_01_wf/hmc_sdc_wf/gather_inputs). I was just trying to understand the inner workings of the preprocessing.

This is what the bids data look like:

sub-100/
└── ses-01
    ├── anat
    │   ├── sub-100_ses-01_run-001_T1w.json
    │   └── sub-100_ses-01_run-001_T1w.nii.gz
    ├── dwi
    │   ├── sub-100_ses-01_dwi.bval
    │   ├── sub-100_ses-01_dwi.bvec
    │   ├── sub-100_ses-01_dwi.json
    │   └── sub-100_ses-01_dwi.nii.gz
    ├── fmap
    │   ├── sub-100_ses-01_acq-dMRIDistortionMap_dir-AP_epi.json
    │   ├── sub-100_ses-01_acq-dMRIDistortionMap_dir-AP_epi.nii.gz
    │   ├── sub-100_ses-01_acq-fMRIDistortionMap_dir-AP_epi.json
    │   ├── sub-100_ses-01_acq-fMRIDistortionMap_dir-AP_epi.nii.gz
    │   ├── sub-100_ses-01_acq-fMRIDistortionMap_dir-PA_epi.json
    │   └── sub-100_ses-01_acq-fMRIDistortionMap_dir-PA_epi.nii.gz
    ├── func
    │   ├── sub-100_ses-01_task-EDWM_run-001_bold.json
    │   ├── sub-100_ses-01_task-EDWM_run-001_bold.nii.gz
    │   ├── sub-100_ses-01_task-EDWM_run-001_events.tsv
    │   ├── sub-100_ses-01_task-EDWM_run-002_bold.json
    │   ├── sub-100_ses-01_task-EDWM_run-002_bold.nii.gz
    │   ├── sub-100_ses-01_task-EDWM_run-002_events.tsv
    │   ├── sub-100_ses-01_task-EDWM_run-003_bold.json
    │   ├── sub-100_ses-01_task-EDWM_run-003_bold.nii.gz
    │   ├── sub-100_ses-01_task-EDWM_run-003_events.tsv
    │   ├── sub-100_ses-01_task-EDWM_run-004_bold.json
    │   ├── sub-100_ses-01_task-EDWM_run-004_bold.nii.gz
    │   ├── sub-100_ses-01_task-EDWM_run-004_events.tsv
    │   ├── sub-100_ses-01_task-EDWM_run-005_bold.json
    │   ├── sub-100_ses-01_task-EDWM_run-005_bold.nii.gz
    │   ├── sub-100_ses-01_task-EDWM_run-005_events.tsv
    │   ├── sub-100_ses-01_task-emonback_run-001_bold.json
    │   ├── sub-100_ses-01_task-emonback_run-001_bold.nii.gz
    │   ├── sub-100_ses-01_task-emonback_run-001_events.tsv
    │   ├── sub-100_ses-01_task-emonback_run-002_bold.json
    │   ├── sub-100_ses-01_task-emonback_run-002_bold.nii.gz
    │   └── sub-100_ses-01_task-emonback_run-002_events.tsv
    └── sub-100_ses-01_scans.tsv

Hi @A_V,

What if you instead do not specify --hmc-model (which is default)? Then, according to the documentation:

If “none” the non-b0 images will be warped using the same transform as their nearest b0 image.

I also recommend adding -e to your singularity run command in general, to make sure no environment variables are being carried into the container by mistake.

And if you can update, do it, just to make sure there's not a bug that's already been addressed.

Best,
Steven

Hi Steven,
Thanks for your feedback! After quite a bit of testing, here are the hurdles I ran into.

Adding -e gave me a GLIBC_2.34 library error that I did not get when running singularity without -e, so I went back to not using it.
Unfortunately, setting --hmc-model none did not fix my issue with eddy using just the 1st b0.

cat qsiprep_wf/single_subject_104_wf/dwi_preproc_ses_01_wf/hmc_sdc_wf/gather_inputs/eddy_index.txt
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1

Besides, I got a message that topup requires eddy and that I would have to use SHORELine instead. Because I want topup corrections, I went back to --hmc-model eddy.

The other issue I am facing is that when I run eddy_cuda with slice-to-volume (s2v) correction, I need to set both --nthreads 1 and --omp_nthreads 1, otherwise I get an error that eddy_cuda can only use 1 CPU. I thought the number of CPUs had nothing to do with the GPU, but setting these options to 1 bypassed the issue. For reference, the generated error is:

EddyInputError:  The version compiled for GPU can only use 1 CPU thread (i.e. --nthr=1)
        Terminating program

So, I moved forward with only 1 CPU, as per the command below:

singularity run \
--nv \
-B /work/brainy/,/mnt/resist/singimages \
/mnt/resist/singimages/qsiprep-stable.sif \
/work/brainy/bids_data \
/work/brainy/dproc \
participant \
--fs-license-file /mnt/resist/singimages/utilities/FS_license.txt \
--anat-modality T1w \
--anatomical-template MNI152NLin2009cAsym \
--output_resolution 1.5 \
--denoise_method dwidenoise \
--unringing-method mrdegibbs \
--b0-motion-corr-to iterative \
--distortion-group-merge concat \
--eddy-config /work/brainy/eddy_params_brainy.json \
--pepolar-method TOPUP \
--write-local-bvecs \
--nthreads 1 --omp_nthreads 1 \
-w /work \
--write-graph \
--participant_label 102 \
--skip-bids-validation \
--stop-on-first-crash

The error that followed was that distortion_merger expects a 4D volume, not a 3D one.

Node: qsiprep_wf.single_subject_102_wf.sub_102_ses_01_final_merge_wf.distortion_merger
Working directory: /work/resist/dproc/qsiprep_wf/single_subject_102_wf/sub_102_ses_01_final_merge_wf/distortion_merger

Node inputs:

b0_refs = <undefined>
b0_threshold = 100
bids_dwi_files = <undefined>
bval_files = <undefined>
bvec_files = <undefined>
carpetplot_data = <undefined>
denoising_confounds = <undefined>
dwi_files = <undefined>
harmonize_b0_intensities = True
raw_concatenated_files = <undefined>

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.10/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.10/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.10/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.10/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
    raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node distortion_merger.

Traceback:
        Traceback (most recent call last):
          File "/usr/local/miniconda/lib/python3.10/site-packages/nipype/interfaces/base/core.py", line 397, in run
            runtime = self._run_interface(runtime)
          File "/usr/local/miniconda/lib/python3.10/site-packages/qsiprep/interfaces/dwi_merge.py", line 62, in _run_interface
            to_concat, b0_means, corrections = harmonize_b0s(self.inputs.dwi_files,
          File "/usr/local/miniconda/lib/python3.10/site-packages/qsiprep/interfaces/dwi_merge.py", line 610, in harmonize_b0s
            b0_mean = index_img(dwi_nii, b0_indices).get_fdata().mean()
          File "/usr/local/miniconda/lib/python3.10/site-packages/nilearn/image/image.py", line 664, in index_img
            imgs = check_niimg_4d(imgs)
          File "/usr/local/miniconda/lib/python3.10/site-packages/nilearn/_utils/niimg_conversions.py", line 415, in check_niimg_4d
            return check_niimg(
          File "/usr/local/miniconda/lib/python3.10/site-packages/nilearn/_utils/niimg_conversions.py", line 332, in check_niimg
            raise DimensionError(len(niimg.shape), ensure_ndim)
        nilearn._utils.exceptions.DimensionError: Input data has incompatible dimensionality: Expected dimension is 4D and you provided a 3D image. See https://nilearn.github.io/stable/manipulating_images/input_output.html.
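The failing check is nilearn refusing a 3D image where the merger expects a 4D one. A numpy-only sketch that mirrors the logic (nilearn's actual check operates on NIfTI images; this just illustrates the 3D-vs-4D distinction and the usual fix of adding a volume axis):

```python
import numpy as np

def check_4d(img):
    # Mirrors the spirit of nilearn's check_niimg_4d: merge inputs must be
    # 4D (x, y, z, volumes); a single extracted b0 is only 3D.
    if img.ndim != 4:
        raise ValueError(
            f"Expected 4D, got {img.ndim}D image of shape {img.shape}"
        )
    return img

vol3d = np.zeros((64, 64, 40))   # e.g. a single extracted b0
vol4d = vol3d[..., np.newaxis]   # wrapped as a 1-volume 4D image

try:
    check_4d(vol3d)
except ValueError as e:
    print(e)  # -> Expected 4D, got 3D image of shape (64, 64, 40)
print(check_4d(vol4d).shape)  # -> (64, 64, 40, 1)
```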

So, after reading a bit, I decided to remove --distortion-group-merge concat because it seems like it is not intended for eddy. However, I got to the point where the pipeline stalled (it did not crash) at

[Node] Error on "qsiprep_wf.single_subject_104_wf.dwi_finalize_ses_01_wf.transform_dwis_t1.local_grad_rotation" (/work/resist/dproc/qsiprep_wf/single_subject_104_wf/dwi_finalize_ses_01_wf/transform_dwis_t1/local_grad_rotation)
exception calling callback for <Future at 0x7a9091c997e0 state=finished raised FileNotFoundError>
concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.10/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.10/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.10/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.10/site-packages/nipype/pipeline/engine/nodes.py", line 722, in _run_command
    result = self._interface.run(cwd=outdir, ignore_exception=True)
  File "/usr/local/miniconda/lib/python3.10/site-packages/nipype/interfaces/base/core.py", line 388, in run
    self._check_mandatory_inputs()
  File "/usr/local/miniconda/lib/python3.10/site-packages/nipype/interfaces/base/core.py", line 275, in _check_mandatory_inputs
    raise ValueError(msg)
ValueError: LocalGradientRotation requires a value for input 'bval_files'. For a list of required inputs, see LocalGradientRotation.help()

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.10/concurrent/futures/process.py", line 246, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/miniconda/lib/python3.10/site-packages/nipype/pipeline/plugins/multiproc.py", line 70, in run_node
    result["result"] = node.result
  File "/usr/local/miniconda/lib/python3.10/site-packages/nipype/pipeline/engine/nodes.py", line 223, in result
    return _load_resultfile(
  File "/usr/local/miniconda/lib/python3.10/site-packages/nipype/pipeline/engine/utils.py", line 291, in load_resultfile
    raise FileNotFoundError(results_file)
FileNotFoundError: /work/resist/dproc/qsiprep_wf/single_subject_104_wf/dwi_finalize_ses_01_wf/transform_dwis_t1/local_grad_rotation/result_local_grad_rotation.pklz
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.10/concurrent/futures/_base.py", line 342, in _invoke_callbacks
    callback(self)
  File "/usr/local/miniconda/lib/python3.10/site-packages/nipype/pipeline/plugins/multiproc.py", line 159, in _async_callback
    result = args.result()
  File "/usr/local/miniconda/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/usr/local/miniconda/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
FileNotFoundError: /work/resist/dproc/qsiprep_wf/single_subject_104_wf/dwi_finalize_ses_01_wf/transform_dwis_t1/local_grad_rotation/result_local_grad_rotation.pklz

Despite the stall, the eddy output was created, and so were the rotated bvec (and bval… ?) files, so I went ahead and removed --write-local-bvecs (I am not sure I understand what it is for). Without --distortion-group-merge concat and --write-local-bvecs, eddy_cuda finished, but the correction is still based on the 1st b0 only, and I cannot figure out how to ask qsiprep to use all my 6 b0s and generate an eddy_index.txt file that looks like this:

1	1	1	1	1	1	1	1	1	1	1	1	1	1	1	1	1	1	2	2	2	2	2	2	2	2	2	2	2	2	2	2	2	2	2	3	3	3	3	3	3	3	3	3	3	3	3	3	3	3	3	3	4	4	4	4	4	4	4	4	4	4	4	4	4	4	4	4	4	5	5	5	5	5	5	5	5	5	5	5	5	5	5	5	5	5	6	6	6	6	6	6	6	6	6	6	6	6	6	6	6	6	6
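If it comes to building such files by hand (e.g. to run eddy outside the pipeline), an index like the one above can be derived from the bval file by assigning each volume to its most recent b0. A sketch, assuming a plain-text bval file and assuming this grouping is what you want eddy to see:

```python
# Hypothetical bvals with a b0 every few volumes; in practice these
# would be read from the .bval file.
bvals = [0, 1000, 1000, 1000, 0, 1000, 1000, 0, 1000]

index = []
b0_count = 0
for b in bvals:
    if b < 100:  # b0 threshold, matching qsiprep's b0_threshold above
        b0_count += 1
    # Volumes before the first b0 (if any) fall back to group 1.
    index.append(max(b0_count, 1))

print(" ".join(map(str, index)))  # -> 1 1 1 1 2 2 2 3 3
```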

Any feedback on this would be greatly appreciated.
Thanks again!! Amelia