MRIQC never runs longer than 25s (IndexError: list index out of range)

Summary of what happened:

I receive an ‘IndexError: list index out of range’ very early on whenever I attempt to run MRIQC.
I am able to run fmriprep just fine.

Command used (and if a helper script was used, a link to the helper script or the command generated):

singularity run --cleanenv -B /data/ncl-mb10:/data/ncl-mb10,/data/ncl-mb13:/data/ncl-mb13,/usr/bin:/usr/bin $sifloc --participant-label $subj --n_procs 4 --omp-nthreads 4 --mem_gb 16 --no-sub --ica --fft-spikes-detector -w $workdir $indir $outdir participant

Version:

MRIQC v23.0.0

Environment (Docker, Singularity, custom installation):

Singularity

Data formatted according to a validatable standard? Please provide the output of the validator:

Yes, the dataset is BIDS-formatted. fmriprep works on the dataset with no issues.

Relevant log outputs (up to 20 lines):

Process Process-2:
Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/opt/conda/lib/python3.9/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/conda/lib/python3.9/site-packages/mriqc/cli/workflow.py", line 61, in build_workflow
    retval["workflow"] = init_mriqc_wf()
  File "/opt/conda/lib/python3.9/site-packages/mriqc/workflows/core.py", line 44, in init_mriqc_wf
    workflow.add_nodes([fmri_qc_workflow()])
  File "/opt/conda/lib/python3.9/site-packages/mriqc/workflows/functional.py", line 130, in fmri_qc_workflow
    iqmswf = compute_iqms()
  File "/opt/conda/lib/python3.9/site-packages/mriqc/workflows/functional.py", line 322, in compute_iqms
    fwhm_interface = get_fwhmx()
  File "/opt/conda/lib/python3.9/site-packages/mriqc/workflows/utils.py", line 178, in get_fwhmx
    afni_version = Info.version()
  File "/opt/conda/lib/python3.9/site-packages/nipype/interfaces/base/core.py", line 1098, in version
    klass._version = klass.parse_version(raw_info)
  File "/opt/conda/lib/python3.9/site-packages/nipype/interfaces/afni/base.py", line 36, in parse_version
    version_stamp = raw_info.split("\n")[0].split("Version ")[1]
IndexError: list index out of range

Screenshots / relevant information:

This may be an error caused by MRIQC being unable to find or retrieve the proper raw_info (the AFNI version string it tries to parse).

This is a known bug (IndexError: list index out of range (MRIQC never runs longer than 25s) · Issue #1087 · nipreps/mriqc · GitHub). Please try version 22.0.6.

Yes, I had noted that bug. I gave 22.0.6 a try and got the same error. Is there something else to try or to be concerned about?

What is the output from the bids validator?

I was having trouble using the BIDS validator. Here is a screenshot of the details of my example dataset. It worked fine for fmriprep.

Unfortunately that is not enough to gauge whether the data are BIDS-valid. How are you trying to run it?

I’m a bit new to this, so it would be great if you could explain what it means to be BIDS-valid beyond the formatting of the directory structure and file names. I am running this through a SIF image on SLURM.

There is a lot of metadata, such as the fields in the JSON sidecar files, that is part of the BIDS standard and that we cannot see from the overall organization of the files alone. You can read about the specifics here: Brain Imaging Data Structure v1.8.0.

You can use the validator in your MRIQC or fmriprep image (I think the command is called bids-validator, may have to confirm later).

singularity exec -e -B $BIDS $IMG bids-validator $BIDS

Where $BIDS is your bids directory and $IMG is your software image.
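For example (the paths below are just placeholders, not from your setup):

singularity exec -e -B /data/my_bids_dataset /data/containers/mriqc-23.0.0.sif bids-validator /data/my_bids_dataset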

Best,
Steven

This seems likely to be the cause:

Bind-mounting /usr/bin into a container seems guaranteed to cause problems: it hides commands that are expected to be present in the container, and dynamic links fail to resolve against the correct /usr/lib.


Thanks, that did help. I am running into another crash. I will post the crash log here, but let me know if I should make another post. For this particular subject, and for simplicity, I am testing with only a T1 and a resting-state scan.

What is your new command?

The same without /usr/bin:
singularity run --cleanenv -B /data/ncl-mb10:/data/ncl-mb10,/data/ncl-mb13:/data/ncl-mb13 $sifloc --participant-label $subj --n_procs 4 --omp-nthreads 4 --mem_gb 16 --no-sub --ica --fft-spikes-detector -vvv -w $workdir $indir $outdir participant

I will mention that I ran the bids-validator and got these errors. It is a little strange, since I followed the BIDS specification 1.8.0 naming conventions (The Brain Imaging Data Structure (BIDS) Specification | Zenodo):

Hi @Hannah.Choi,

epi is not a BIDS-valid suffix for functional data. Those look like fieldmaps, which should go into an fmap folder. Additionally, you need to add "TaskName": "rest" to your BOLD JSON sidecars. If you can be certain that the acquisition parameters (e.g., TR, slice timing) are consistent across all the scans, you can instead have a single task-rest_bold.json file in the BIDS root directory that will apply to all task-rest scans, as in the sketch below.
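A minimal task-rest_bold.json placed in the BIDS root could look like this (sketch only; only the required TaskName field is shown, and any other shared acquisition metadata can go in the same file):

{
    "TaskName": "rest"
}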

Additionally, you can simplify the bind strings in your command a bit. If you are not renaming your mounted paths, you do not need the ":/data/XXXX" part when you bind them. Also, since everything you need is under /data, you can just bind that (all the subfolders will be included). So your command can be:

singularity run -e -B /data $sifloc --participant-label $subj --n_procs 4 --omp-nthreads 4 --mem_gb 16 --no-sub --ica --fft-spikes-detector -vvv -w $workdir $indir $outdir participant

Without seeing your entire script, I assume that $workdir, $indir, and $outdir are all contained in /data.

Best,
Steven

Thank you so much for your help! I was able to get the example dataset to be BIDS-valid and to simplify my command. Thanks for the helpful tips and explanations.

I was hoping to keep everything within one directory ($workdir, $indir, $outdir), but I am getting this new error: mriqc: error: The selected working directory is a subdirectory of the input BIDS folder. Please modify the output path.

Is it not good practice, or not valid for MRIQC, to have the input, working, and output directories under the same project directory?

Typically the output directory is in $BIDSROOT/derivatives, but the working directory should be somewhere outside of the BIDS directory.
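For example (illustrative layout only, adjust to your own paths), keeping the working directory outside the BIDS input:

singularity run -e -B /data $sifloc --participant-label $subj --n_procs 4 --omp-nthreads 4 --mem_gb 16 --no-sub --ica --fft-spikes-detector -vvv -w /data/mriqc_work $indir $indir/derivatives/mriqc participant

where /data/mriqc_work is a scratch directory that is not inside $indir.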

Thanks.
I am now running into this error:

Traceback (most recent call last):
  File "/opt/conda/bin/mriqc", line 8, in <module>
    sys.exit(main())
  File "/opt/conda/lib/python3.9/site-packages/mriqc/cli/run.py", line 167, in main
    mriqc_wf.run(**_plugin)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/workflows.py", line 638, in run
    runner.run(execgraph, updatehash=updatehash, config=self.config)
  File "/opt/conda/lib/python3.9/site-packages/mriqc/engine/plugin.py", line 184, in run
    self._clean_queue(jobid, graph, result=result)
  File "/opt/conda/lib/python3.9/site-packages/mriqc/engine/plugin.py", line 256, in _clean_queue
    raise RuntimeError("".join(result["traceback"]))
RuntimeError: Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/site-packages/mriqc/engine/plugin.py", line 60, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
    raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node gcor.

Cmdline:
	@compute_gcor -mask /data/ncl-mb13/mriqc_work/mriqc_wf/funcMRIQC/ComputeIQMs/_in_file_..data..ncl-mb13..SPIN_TEST..sub-SPIN007..func..sub-SPIN007_task-rest_bold.nii.gz/gcor/sub-SPIN007_task-rest_bold_valid_volreg_tstat_mask.nii.gz -input /data/ncl-mb13/mriqc_work/mriqc_wf/funcMRIQC/ComputeIQMs/_in_file_..data..ncl-mb13..SPIN_TEST..sub-SPIN007..func..sub-SPIN007_task-rest_bold.nii.gz/gcor/sub-SPIN007_task-rest_bold_valid_volreg.nii.gz
Stdout:
	** failed to get view of -input /data/ncl-mb13/mriqc_work/mriqc_wf/funcMRIQC/ComputeIQMs/_in_file_..data..ncl-mb13..SPIN_TEST..sub-SPIN007..func..sub-SPIN007_task-rest_bold.nii.gz/gcor/sub-SPIN007_task-rest_bold_valid_volreg.nii.gz, check command
Stderr:
	/bin/netstat: Command not found.
	3dinfo: Command not found.
Traceback:
	Traceback (most recent call last):
	  File "/opt/conda/lib/python3.9/site-packages/nipype/interfaces/base/core.py", line 398, in run
	    runtime = self._run_interface(runtime)
	  File "/opt/conda/lib/python3.9/site-packages/mriqc/interfaces/transitional.py", line 89, in _run_interface
	    gcor_line = [
	IndexError: list index out of range

These seem to be the relevant bits. Does the file /data/ncl-mb13/mriqc_work/mriqc_wf/funcMRIQC/ComputeIQMs/_in_file_..data..ncl-mb13..SPIN_TEST..sub-SPIN007..func..sub-SPIN007_task-rest_bold.nii.gz/gcor/sub-SPIN007_task-rest_bold_valid_volreg.nii.gz exist?

I suspect it’s not a problem for netstat not to exist, but if @compute_gcor can’t find 3dinfo, that could lead to the problem of not finding a view… If you singularity shell into the container, can you run 3dinfo?
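For example (assuming the same bind point and image variable as in your run command):

singularity shell -e -B /data $sifloc

and then, at the container prompt, run 3dinfo to check whether it is found on the PATH.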

Yes, the file exists.

Yes, if I singularity shell into the container, I can run 3dinfo.

I also updated my command to this, per Steven's suggestion:
singularity run -e -B /data $sifloc --participant-label $subj --n_procs 4 --omp-nthreads 4 --mem_gb 16 --no-sub --ica --fft-spikes-detector -vvv -w $workdir $indir $outdir participant

$workdir $indir $outdir are all contained in /data.

And what happens when you run the command line that it says failed?
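For example (assuming the same bind and image as in your run command), you could try re-running it inside the container:

singularity exec -e -B /data $sifloc @compute_gcor -mask <the -mask path from the Cmdline above> -input <the -input path from the Cmdline above>

with the two paths copied from the Cmdline section of the crash log.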

Could you also please let us know how you obtained the container image?