DSI Studio error in qsiprep

Summary of what happened:

This issue crops up in the middle of processing, during the DSI Studio GQI reconstruction that QSIPrep runs for QC (the raw_gqi nodes): dsi_studio exits with std::bad_alloc and the workflow aborts.

Command used (and if a helper script was used, a link to the helper script or the command generated):

singularity run --cleanenv \
    -B ${OUTPUT_DIR}:/derivatives:ro,${OUTPUT_DIR}:/out,${WORK_DIR}:${WORK_DIR},${FS_LICENSE}:/opt/freesurfer/license.txt \
    /wynton/group/rsl/utils/software/qsiprep_${VERSION}.sif \
    /derivatives/nii /out participant \
    --fs-license-file /opt/freesurfer/license.txt \
    --output-resolution 1.7 \
    --work-dir ${WORK_DIR} \
    --unringing-method mrdegibbs \
    --denoise-method dwidenoise \
    --nthreads ${ncpus} \
    --participant_label ${subjectID} \
    --separate_all_dwis \
    --skip-anat-based-spatial-normalization \
    --omp-nthreads $((NTHREADS - 2)) \
    -v -v

Version:

0.19.0, 0.19.1, and 0.20.0

Environment (Docker, Singularity / Apptainer, custom installation):

Singularity

Data formatted according to a validatable standard? Please provide the output of the validator:

	Please visit https://neurostars.org/search?q=README_FILE_MISSING for existing conversations about this issue.


        Summary:                 Available Tasks:        Available Modalities: 
        8 Files, 333.18MB                                T1w                   
        1 - Subject                                      dwi                   
        1 - Session                                                            


	If you have any questions, please post on https://neurostars.org/tags/bids.

Making sure the input data is BIDS compliant (warnings can be ignored in most cases).
240430-17:11:01,733 nipype.workflow INFO:
	 Running with omp_nthreads=-2, nthreads=4
240430-17:11:01,737 nipype.workflow IMPORTANT:
	 
    Running qsiprep version 0.19.1:
      * BIDS dataset path: /derivatives/nii.

Relevant log outputs (up to 20 lines):

	 ***********************************
240430-20:19:34,813 nipype.workflow ERROR:
	 could not run node: qsiprep_wf.single_subject_PR01_wf.dwi_preproc_ses_1021_acq_HARDI_wf.pre_hmc_wf.dwi_qc_wf.raw_gqi
240430-20:19:34,822 nipype.workflow INFO:
	 crashfile: /out/qsiprep/sub-PR01/log/20240430-171101_f375c5ac-d180-4a1c-8218-a652a0080569/crash-20240430-175742-subanerjee-raw_gqi-4ba506e5-722e-4bf1-af0d-72f8f7a1f945.txt
240430-20:19:34,822 nipype.workflow ERROR:
	 could not run node: qsiprep_wf.single_subject_PR01_wf.dwi_finalize_ses_1021_acq_HARDI_wf.transform_dwis_t1.calculate_qc.raw_gqi
240430-20:19:34,830 nipype.workflow INFO:
	 crashfile: /out/qsiprep/sub-PR01/log/20240430-171101_f375c5ac-d180-4a1c-8218-a652a0080569/crash-20240430-201620-subanerjee-raw_gqi-c1e8d7b5-d8ef-4ad5-9a31-855eded82ba2.txt
240430-20:19:34,831 nipype.workflow ERROR:
	 could not run node: qsiprep_wf.single_subject_PR01_wf.dwi_finalize_ses_1021_acq_HARDI_wf.final_denoise_wf.calculate_qc.raw_gqi
240430-20:19:34,838 nipype.workflow INFO:
	 crashfile: /out/qsiprep/sub-PR01/log/20240430-171101_f375c5ac-d180-4a1c-8218-a652a0080569/crash-20240430-201832-subanerjee-raw_gqi-7126218c-6e49-4b0e-abee-52daa23885aa.txt
240430-20:19:34,839 nipype.workflow INFO:
	 ***********************************
QSIPrep failed: 3 raised. Re-raising first.
RuntimeError: Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
    raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node raw_gqi.

Cmdline:
	dsi_studio --action=rec --method=4 --align_acpc=0 --check_btable=0 --dti_no_high_b=1 --source=/scratch/817461.1.member.q/qsiprep_1_sub-PR01/qsiprep_wf/single_subject_PR01_wf/dwi_preproc_ses_1021_acq_HARDI_wf/pre_hmc_wf/dwi_qc_wf/raw_gqi/sub-PR01_ses-1021_acq-HARDI_dwi_merged.src.gz --num_fiber=3 --thread_count=-2 --other_output=all --record_odf=1 --r2_weighted=0 --param0=1.2500 --thread_count=-2
Stdout:
	DSI Studio version: Chen"陳" Aug  1 2023
	│ DSI Studio version: Chen"陳"
	│ action=rec
	│ source=/scratch/817461.1.member.q/qsiprep_1_sub-PR01/qsiprep_wf/single_subject_PR01_wf/dwi_preproc_ses_1021_acq_HARDI_wf/pre_hmc_wf/dwi_qc_wf/raw_gqi/sub-PR01_ses-1021_acq-HARDI_dwi_merged.src.gz
	│ loop=/scratch/817461.1.member.q/qsiprep_1_sub-PR01/qsiprep_wf/single_subject_PR01_wf/dwi_preproc_ses_1021_acq_HARDI_wf/pre_hmc_wf/dwi_qc_wf/raw_gqi/sub-PR01_ses-1021_acq-HARDI_dwi_merged.src.gz
	├─run rec
	│ ├─open SRC file sub-PR01_ses-1021_acq-HARDI_dwi_merged.src.gz
	│ │ │ prepare index file for future accelerated loading
	│ │ │ saving index file for accelerated loading: sub-PR01_ses-1021_acq-HARDI_dwi_merged.src.gz.idx
	│ │ └─5.298 s
	│ ├─reconstruction parameters:
	│ │ │ method=4
	│ │ │ odf_resolving=0
	│ │ │ record_odf=1
	│ │ │ dti_no_high_b=1
	│ │ │ check_btable=0
	│ │ │ other_output=all
	│ │ │ r2_weighted=0
	│ │ │ thread_count=-2
	│ │ │ param0=1.2500
	│ │ │ param1=3000
	│ │ │ param2=0.05
	│ │ │ template 0:"ICBM152_adult.QA.nii"
	│ │ │ template 1:"C57BL6_mouse.QA.nii"
	│ │ │ template 2:"dHCP_neonate.QA.nii"
	│ │ │ template 3:"INDI_rhesus.QA.nii"
	│ │ │ template 4:"Pitt_marmoset.QA.nii"
	│ │ │ template 5:"WHS_SD_rat.QA.nii"
	│ │ │ template=0
	│ │ └─0 ms
	│ ├─specify mask
	│ │ │ mask=1
	│ │ └─2 ms
	│ ├─preprocessing
	│ │ │ preprocessing=0
	│ │ │ motion_correction=0
	│ │ └─0 ms
	│ ├─additional processing steps
	│ │ │ align_acpc=0
	│ │ └─0 ms
	│ ├─initializing
	│ │ └─6 ms
	│ │ ERROR:std::bad_alloc
	│ └─5.321 s
	└─5.326 s
	Warning: --num_fiber is not used/recognized. Did you mean --template ?
	Warning: --thread_count is not used/recognized. Did you mean --other_image ?
Stderr:

Traceback:
	Traceback (most recent call last):
	  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 400, in run
	    outputs = self.aggregate_outputs(runtime)
	  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 429, in aggregate_outputs
	    predicted_outputs = self._list_outputs()  # Predictions from _list_outputs
	  File "/usr/local/miniconda/lib/python3.8/site-packages/qsiprep/interfaces/dsi_studio.py", line 291, in _list_outputs
	    assert len(results) == 1
	AssertionError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/miniconda/bin/qsiprep", line 8, in <module>
    sys.exit(main())
  File "/usr/local/miniconda/lib/python3.8/site-packages/qsiprep/cli/run.py", line 677, in main
    qsiprep_wf.run(**plugin_settings)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/workflows.py", line 638, in run
    runner.run(execgraph, updatehash=updatehash, config=self.config)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/base.py", line 224, in run
    raise error from cause
RuntimeError: 3 raised. Re-raising first.

Screenshots / relevant information:

I wish I could provide more context, but I'm not sure what this error means. I've hit it a couple of times in the past and re-running usually resolved it, but this time it's persistent.


Hi @suneelbanerjee, what is a typical value for NTHREADS? Your log shows "Running with omp_nthreads=-2, nthreads=4", and dsi_studio was invoked with --thread_count=-2, which suggests NTHREADS was unset when the job started, so $((NTHREADS - 2)) expanded to -2. It's also possible that you're running out of memory, since each thread allocates its own working memory and std::bad_alloc is a failed allocation. I have typically run this with nthreads=8 and omp-nthreads=8 to keep memory usage low (< 32GB).
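
If an unset NTHREADS is the culprit, a small guard in the submission script makes the failure explicit instead of silently passing a negative thread count to qsiprep. A minimal bash sketch, assuming a wrapper script like the command above (OMP_NTHREADS is an illustrative variable name of mine, not a qsiprep option):

# Abort early if NTHREADS is unset, rather than letting
# $((NTHREADS - 2)) silently evaluate to -2 as in the log above.
: "${NTHREADS:?NTHREADS must be set before submitting}"

# Default ncpus to NTHREADS if it was not exported separately.
: "${ncpus:=${NTHREADS}}"

# Never pass --omp-nthreads a value below 1.
OMP_NTHREADS=$(( NTHREADS > 2 ? NTHREADS - 2 : 1 ))

# ...then call qsiprep as before, with:
#   --nthreads "${ncpus}" --omp-nthreads "${OMP_NTHREADS}"

With the guard in place, a job submitted without NTHREADS fails immediately with a clear message instead of hours into preprocessing.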