QSIprep Reconstruction - want to make sure output files are correctly reconstructed

Summary of what happened:

The outputs from QSIPrep's reconstruction were produced, but there is no .html report, and I want to make sure the reconstructed files are fully processed. When viewed with fsleyes, the outputs look correct. However, there are no .html outputs, and there are crashes in calc_connectivity, create_src, get_atlases, plot_peaks, and tracking. I have posted the crash from plot_connectivity. Secondly, there is a consistent error that it cannot find the preproc file (which I have confirmed exists in the qsiprep preprocessing output).

Command used (and if a helper script was used, a link to the helper script or the command generated):

singularity run --cleanenv -B /mnt/cina/WMH:/WMH /mnt/cina/WMH/lib/qsiprep-0.16.1.simg /WMH/data/BIDS/ADNI/ /WMH/data/BIDS/ADNI/derivatives/qsiprep participant --participant_label sub-007S1222 --output-resolution 1.0 --recon-only --recon-spec amico_noddi --recon-only --recon-spec dsi_studio_gqi --recon-input /WMH/data/BIDS/ADNI/derivatives/qsiprep/qsiprep --fs_license_file /WMH/lib/fs_license.txt --skip-bids-validation -w /WMH/data/intermediates_test/qsiprep

Version:

qsiprep-0.16.1.simg

Environment (Docker, Singularity, custom installation):

Data formatted according to a validatable standard? Please provide the output of the validator:

Relevant log outputs (up to 20 lines):

Screenshots / relevant information:



Hi,

A few things:

  1. In the future, could you please copy and paste the terminal error outputs into your post? Screenshots are not as easy to read.

  2. You specify two recon specs, which I am not sure is allowed. If you want to run both of these workflows, you can either run two QSIRecon commands or combine the JSONs into one file and pass the resulting JSON to the --recon-spec argument. (You can find the pipeline JSONs here: qsiprep/qsiprep/data/pipelines at master · PennLINC/qsiprep · GitHub)
  3. Is this error subject-specific, or does it affect everyone? I see some subjects have .htmls and others do not.
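For illustration, combining two pipeline JSONs could be done with a short script like the one below. This is a sketch, not an official QSIPrep tool: it assumes each spec follows the layout of the files in qsiprep/data/pipelines (a top-level name, a space, and a list of nodes), and that node names do not collide between the two specs.

```python
import json


def merge_recon_specs(spec_path_a, spec_path_b, out_path, name="combined"):
    """Concatenate the node lists of two QSIRecon pipeline JSONs.

    Assumes each spec is shaped like {"name": ..., "space": ..., "nodes": [...]},
    as in the pipeline files under qsiprep/data/pipelines. Node names must be
    unique across the two specs for the merged workflow to make sense.
    """
    with open(spec_path_a) as f:
        spec_a = json.load(f)
    with open(spec_path_b) as f:
        spec_b = json.load(f)
    merged = dict(spec_a)      # keep top-level keys (e.g. "space") from the first spec
    merged["name"] = name
    merged["nodes"] = spec_a["nodes"] + spec_b["nodes"]
    with open(out_path, "w") as f:
        json.dump(merged, f, indent=2)
    return merged
```

The resulting file can then be bind-mounted into the container and passed to --recon-spec as a path.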

Yes, I will post the terminal error outputs going forward.

For the reconstruction - I was able to specify two recon specs and it did not crash for 61 subjects. It did crash for the others.

I will try running one recon-spec at a time. However, why would it work for some subjects and not for others?

Thanks in advance.

What does the error about not finding the preprocessed files look like?

It is highlighted in white - sorry, I don't have the terminal output at hand to paste it from there.

That is not indicating that the preprocessed files were not found; it indicates that something in the work directory was not found. Does this error persist with a fresh working directory, and are there errors that precede it?
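As an aside, if you want to see which nodes in the Nipype working directory are missing their cached result pickle (the result_*.pklz file named in errors like the one below), a quick scan like this can help. It is a heuristic sketch: it assumes each executed node folder contains a _report subdirectory and, on success, a result_<node>.pklz file, matching the paths shown in such tracebacks.

```python
import os


def missing_node_results(workflow_dir):
    """Walk a Nipype workflow directory and report node folders whose cached
    result pickle is absent.

    Heuristic: a node folder named <node> normally contains result_<node>.pklz
    after a successful run, and executed nodes carry a _report subdirectory.
    This mirrors the paths shown in Nipype tracebacks but is an assumption,
    not a documented API.
    """
    missing = []
    for root, dirs, files in os.walk(workflow_dir):
        node = os.path.basename(root)
        # only treat leaves that look like executed nodes
        if "_report" in dirs and "result_{}.pklz".format(node) not in files:
            missing.append(root)
    return missing
```

Nodes reported by this scan are good candidates for the real upstream failure, since later FileNotFoundError exceptions are often just downstream fallout.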

Yes, below is the error copied from the terminal.

	 [Node] Error on "qsirecon_wf.sub-007S1222_amico_noddi.sub_007S1222_ses_004_run_01_space_T1w_desc_preproc_recon_wf.qsirecon_anat_wf.resample_mask" (/WMH/data/intermediates/qsiprep/qsirecon_wf/sub-007S1222_amico_noddi/sub_007S1222_ses_004_run_01_space_T1w_desc_preproc_recon_wf/qsirecon_anat_wf/resample_mask)
exception calling callback for <Future at 0x7f759c9eeb20 state=finished raised FileNotFoundError>
concurrent.futures.process._RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 722, in _run_command
    result = self._interface.run(cwd=outdir, ignore_exception=True)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 388, in run
    self._check_mandatory_inputs()
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 275, in _check_mandatory_inputs
    raise ValueError(msg)
ValueError: Resample requires a value for input 'in_file'. For a list of required inputs, see Resample.help()

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/process.py", line 239, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 70, in run_node
    result["result"] = node.result
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 223, in result
    return _load_resultfile(
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/utils.py", line 291, in load_resultfile
    raise FileNotFoundError(results_file)
FileNotFoundError: /WMH/data/intermediates/qsiprep/qsirecon_wf/sub-007S1222_amico_noddi/sub_007S1222_ses_004_run_01_space_T1w_desc_preproc_recon_wf/qsirecon_anat_wf/resample_mask/result_resample_mask.pklz
"""
The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/_base.py", line 328, in _invoke_callbacks
    callback(self)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 159, in _async_callback
    result = args.result()
  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/_base.py", line 437, in result
    return self.__get_result()
  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/_base.py", line 389, in __get_result
    raise self._exception
FileNotFoundError: /WMH/data/intermediates/qsiprep/qsirecon_wf/sub-007S1222_amico_noddi/sub_007S1222_ses_004_run_01_space_T1w_desc_preproc_recon_wf/qsirecon_anat_wf/resample_mask/result_resample_mask.pklz
221209-13:10:54,578 nipype.workflow INFO:
	 [Node] Executing "resample_mask" <nipype.interfaces.afni.utils.Resample>
221209-13:10:54,737 nipype.workflow INFO:
	 [Node] Executing "odf_rois" <nipype.interfaces.ants.resampling.ApplyTransforms>
221209-13:10:54,738 nipype.workflow WARNING:
	 [Node] Error on "qsirecon_wf.sub-007S1222_amico_noddi.sub_007S1222_ses_004_run_01_space_T1w_desc_preproc_recon_wf.qsirecon_anat_wf.odf_rois" (/WMH/data/intermediates/qsiprep/qsirecon_wf/sub-007S1222_amico_noddi/sub_007S1222_ses_004_run_01_space_T1w_desc_preproc_recon_wf/qsirecon_anat_wf/odf_rois)
exception calling callback for <Future at 0x7f759c9a99a0 state=finished raised FileNotFoundError>
concurrent.futures.process._RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 722, in _run_command
    result = self._interface.run(cwd=outdir, ignore_exception=True)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 388, in run
    self._check_mandatory_inputs()
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 275, in _check_mandatory_inputs
    raise ValueError(msg)
ValueError: ApplyTransforms requires a value for input 'transforms'. For a list of required inputs, see ApplyTransforms.help()

How did you run QSIPrep?

singularity run --cleanenv -B /mnt/cina/WMH:/WMH /mnt/cina/WMH/lib/qsiprep-0.16.1.simg /WMH/data/BIDS/ADNI/ /WMH/data/BIDS/ADNI/derivatives/qsiprep participant --participant_label sub-002S4213 --output-resolution 1.0 --fs_license_file /WMH/lib/fs_license.txt --freesurfer_input /WMH/data/BIDS/ADNI/derivatives/sourcedata/freesurfer --skip-bids-validation

Have you tried running preprocessing and reconstruction in the same step? I will also note that 1.0 mm is a pretty fine resolution, making your files larger and requiring more computational power to process. What is the original resolution of your images? Does it warrant that level of resampling? At some point you get diminishing returns from resampling.

singularity run --cleanenv -B /mnt/cina/WMH:/WMH /mnt/cina/WMH/lib/qsiprep-0.16.1.simg /WMH/data/BIDS/ADNI/ /WMH/data/BIDS/ADNI/derivatives/qsiprep participant --participant_label sub-002S4213 --stop-on-first-crash --notrack --output-resolution 1.2 --fs_license_file /WMH/lib/fs_license.txt --freesurfer_input /WMH/data/BIDS/ADNI/derivatives/sourcedata/freesurfer --recon-spec dsi_studio_gqi --skip-bids-validation

Would I need to run them simultaneously? I have all subjects preprocessed with qsiprep's preprocessing pipeline without error, so I don't want to run it again.

Is there a drawback to not getting .html outputs? Will the subjects that are processed without an .html output have compromised outputs in your experience?

As long as you use the same working directory, most things should be skipped. Running these in the same command will avoid any potential errors of QSIRecon not finding the QSIPrep outputs.

This indicates that there were errors. And looking at the HTMLs is important for quality control.

Hi Steven,

I am finding that qsirecon errors out with multi-session subjects.

This is the error message I am getting (as above):

Node: qsirecon_wf.sub-007S4272_dsistudio_pipeline.sub_007S4272_ses_006_run_01_space_T1w_desc_preproc_recon_wf.qsirecon_anat_wf.get_atlases
Working directory: /WMH/data/intermediates/qsiprep/qsirecon_wf/sub-007S4272_dsistudio_pipeline/sub_007S4272_ses_006_run_01_space_T1w_desc_preproc_recon_wf/qsirecon_anat_wf/get_atlases

Node inputs:

atlas_names = ['schaefer100', 'schaefer200', 'schaefer400', 'brainnetome246', 'aicha384', 'gordon333', 'aal116']
forward_transform = <undefined>
reference_image = /WMH/data/BIDS/ADNI/derivatives/qsiprep/qsiprep/sub-007S4272/ses-006/dwi/sub-007S4272_ses-006_run-01_space-T1w_desc-preproc_dwi.nii.gz
space = T1w

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 344, in _send_procs_to_workers
    self.procs[jobid].run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
    raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node get_atlases.

Traceback:
      Traceback (most recent call last):
        File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 398, in run
          runtime = self._run_interface(runtime)
        File "/usr/local/miniconda/lib/python3.8/site-packages/qsiprep/interfaces/utils.py", line 51, in _run_interface
          raise Exception("No MNI to T1w transform found in anatomical directory")
      Exception: No MNI to T1w transform found in anatomical directory

And I have attached two screenshots: 1) showing the successfully preprocessed multi-session subject (8 sessions), and 2) showing that qsirecon successfully processed session 2 (with dwi) but is crashing on sessions 4, 6, and 8 (all of which have dwi).


Can qsirecon take subjects with multiple dwi sessions?

The presence of an HTML doesn't necessarily mean there were no errors. Are the contents of the QSIPrep and/or QSIRecon anat folders different in the sessions that are crashing? Are you using the combined command I mentioned in my last answer, with a fresh working directory?
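To check the anat folders, you could look for the MNI-to-T1w transform that get_atlases needs with a small helper like this. The filename pattern is an assumption based on typical QSIPrep derivative naming (e.g. sub-<label>_from-MNI152NLin2009cAsym_to-T1w_mode-image_xfm.h5); adjust the pattern to whatever your anat folders actually contain.

```python
import glob
import os


def find_mni_to_t1w_xfm(qsiprep_dir, subject):
    """Look for an MNI->T1w transform in a subject's QSIPrep anat folder.

    The glob pattern is an assumption based on typical QSIPrep derivative
    naming; it is not taken from QSIPrep's code. Returns the sorted list
    of matching transform files (empty if none, which would explain the
    'No MNI to T1w transform found' error above).
    """
    pattern = os.path.join(
        qsiprep_dir, "sub-{}".format(subject), "anat",
        "sub-{}_from-*_to-T1w_mode-image_xfm.h5".format(subject))
    return sorted(glob.glob(pattern))
```

Running it for a session that works and one that crashes should show quickly whether the transform is present for one subject but not the other.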

Best,
Steven

Problem

I am encountering a very similar issue to the one reported here with the qsiprep reconstruction step. I've had some success and some failure with the same data and what appears to be the same environment (same container, same OS, apptainer in both cases).

Jan 2, 2023: It succeeded and produced the HTML report which claimed success. See attached slurm-qsirecon-success.out.txt

April 8, 2023: The output in the dwi directory is produced (all the same 40 files are included as in January), but the HTML report is not produced and qsiprep reconstruction claims to have failed. See attached slurm-qsirecon-fail.out.txt

diff reports differences in some of the files; I looked at one and it was a slight numerical difference, so I'm going to assume this is due to the probabilistic nature of the processing.

Environment

Version: qsiprep 0.16.1
OS: HPC CentOS Linux release 7.9.2009 (Core)
apptainer version: 1.1.5-1.el7

Input data:

Validated BIDS directory: CAM003 with session. Available here: https://osf.io/m2dz7

This is a 3-shell acquisition on a Siemens Skyra. A reverse-phase-encode image is provided. IntendedFor is specified.

COMMANDS

QSIPREP

This qsiprep command works flawlessly every time:

singularity run --cleanenv --bind ${MRIS}/data:/data:ro \
--bind ${MRIS}/derivatives:/out \
--bind ${MRIS}/qsi_work:/work \
${SIF}/qsiprep.sif /data /out participant \
--participant_label ${Subject} \
--fs-license-file ${HOME}/license.txt \
--stop-on-first-crash \
--output-resolution 1.3 -w /work --n_cpus 16 -v -v

QSIPREP RECONSTRUCTION

It has succeeded and failed on separate runs, causing me confusion!

# Use --cleanenv to prevent problems with singularity grabbing variables, libraries etc.
# from your HPC directories. You want the environment to be defined
# by what is inside the container.
# We specify the name of the singularity container to run.
# We need to bind three directories:
# 1) the derivatives created by qsiprep
# 2) the main derivatives directory
# 3) the work directory
# The recon-input is the output of qsiprep
# The recon spec can vary, there are a dozen or so canned options described on the read-the-docs site
# Output resolution is 1.3 as per suggestions
# If you ran freesurfer when you did fmriprep, then you can now specify those freesurfer results to be used with qsirecon

singularity run --cleanenv --bind ${MRIS}/data:/data:ro \
--bind ${MRIS}/derivatives/qsiprep:/qsiprep-output:ro \
--bind ${MRIS}/derivatives:/out \
--bind ${MRIS}/qsi_work:/work \
${SIF}/qsiprep.sif /data /out participant \
--participant_label ${Subject} \
--recon-input /qsiprep-output \
--recon-spec mrtrix_multishell_msmt_noACT \
--output-resolution 1.3 -w /work -v -v \
--fs-license-file ${HOME}/license.txt \
--freesurfer_input /out/sourcedata/freesurfer

Output

  • The dwi directory contains all 40 output files in both the success and failure cases.
  • Some files do differ, but I assume this is because of the probabilistic nature of the processing
  • Both directories are ~14 GB
  • The failure does not generate the HTML report.
  • I have attached both SLURM logs
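For reference, the file-set comparison between the success and failure runs can be done with plain Python (no QSIPrep-specific assumptions), which separates "same files, different bytes" from "missing files":

```python
import os


def compare_trees(dir_a, dir_b):
    """Compare the relative file listings of two derivative directories,
    e.g. a success-run and a failure-run QSIRecon output folder.

    Only compares which files exist, not their contents, so probabilistic
    byte-level differences between runs are ignored.
    """
    def listing(d):
        return {os.path.relpath(os.path.join(root, f), d)
                for root, _, files in os.walk(d) for f in files}
    a, b = listing(dir_a), listing(dir_b)
    return {"only_in_a": sorted(a - b),
            "only_in_b": sorted(b - a),
            "common": sorted(a & b)}
```

If only_in_a / only_in_b are empty, the two runs produced the same file set and the failure is likely confined to the reporting stage.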

Any help or suggestions would be appreciated.

Thank you for your time,

Dianne Patterson, Ph.D.

slurm-qsirecon-fail.out.txt (476.4 KB)
slurm-qsirecon-success.out.txt (224.6 KB)

Hi @Dianne_Patterson,

  1. What version of QSIPrep are you using?

  2. Given that you have the means for susceptibility distortion correction and have FreeSurfer outputs available, why are you using a _noACT option? It seems like your data would be best suited to the mrtrix_multishell_msmt_ACT-hsvs workflow.

  3. Based on the errors in the message, it looks like it is something with the ODF / peaks report image. This has been a common enough issue (particularly among HPC users) that there is now a --skip-odf-reports flag you can enable. Of note, the only difference is that you no longer get a picture of the ODF directions, which helps with manual quality assurance; the outputs you get should be the same. You can load your ODFs into something like mrview from MRtrix3 to do the same quality assurance after QSIPrep/QSIRecon is done.

So, the short answer is: try enabling the --skip-odf-reports flag, and as long as the final files are created by the workflow (in your case, the connectivity matrices), you can feel confident that your workflow completed as intended!

Best,
Steven

Thanks so much for your reply!
Here’s the version info (buried in the first message): Version: qsiprep 0.16.1

I am trying now with the --skip-odf-reports flag and --recon-only.
I chose noACT because I am interested in grey-matter parcellations and, if I understand correctly, applying ACT is going to remove the grey-matter information from the tracking. However, my background is with FSL and not MRtrix3, so I may be missing something.

Not quite. ACT improves the termination and seeding of tractography streamlines by using a better-resolved grey-matter/white-matter interface, but the resulting connectivity matrix will still be linked to grey-matter parcellations based on distances between streamline endpoints and grey-matter parcels.

Best,
Steven

I appreciate the correction!

The revised script ran:

singularity run --cleanenv --bind ${MRIS}/data:/data:ro \
--bind ${MRIS}/derivatives2/qsiprep:/qsiprep-output:ro \
--bind ${MRIS}/derivatives2:/out \
--bind ${MRIS}/qsi_work3:/work \
${SIF}/qsiprep.sif /data /out participant \
--participant_label ${Subject} \
--recon-input /qsiprep-output \
--recon-spec mrtrix_multishell_msmt_ACT-hsvs \
--stop-on-first-crash \
--output-resolution 1.3 -w /work -v -v \
--n-cpus 16 --omp-nthreads 15 \
--skip-odf-reports --recon-only \
--fs-license-file ${HOME}/license.txt \
--freesurfer_input /out/fmriprep2301/sourcedata/freesurfer

I’m betting the --skip-odf-reports was crucial! Thanks so much!
