We just downloaded the new fmriprep_20.2.6 container from nipreps and are running into an error at the template-selection step of the workflow.
Below is an abbreviated output of the log file up to the first instance of the error:
Running fMRIPREP version 20.2.6:
* BIDS dataset path: /scratch/switt4/Undergrads_fMRI/TAR/bids.
* Participant list: ['001'].
* Run identifier: 20211108-142242_3b3ca22e-c13c-42e4-bab7-ddf7d5a42849.
* Output spaces: MNI152NLin2009cAsym:res-native.
* Pre-run FreeSurfer's SUBJECTS_DIR: /scratch/switt4/Undergrads_fMRI/TAR/fmriprep/fmriprep_20.2.6/sourcedata/freesurfer.
211108-14:23:21,309 nipype.workflow INFO:
No single-band-reference found for sub-001_task-movie_run-01_bold.nii.gz.
211108-14:23:23,127 nipype.workflow IMPORTANT:
BOLD series will be slice-timing corrected to an offset of 0.96s.
211108-14:23:23,366 nipype.workflow INFO:
No single-band-reference found for sub-001_task-rest_run-01_bold.nii.gz.
211108-14:23:23,780 nipype.workflow IMPORTANT:
BOLD series will be slice-timing corrected to an offset of 1.06s.
211108-14:23:29,443 nipype.workflow INFO:
fMRIPrep workflow graph with 576 nodes built successfully.
211108-14:23:46,999 nipype.workflow IMPORTANT:
fMRIPrep started!
211108-14:23:48,179 nipype.workflow WARNING:
[Node] Error on "fmriprep_wf.single_subject_001_wf.func_preproc_task_rest_run_01_wf.bold_std_trans_wf.select_tpl" (/localscratch/switt4.54165523.0/fmriprep_wf/single_subje>
211108-14:23:48,180 nipype.workflow ERROR:
Node select_tpl.a0 failed to run on host gra1237.
211108-14:23:48,203 nipype.workflow ERROR:
Saving crash info to /scratch/switt4/Undergrads_fMRI/TAR/fmriprep/fmriprep_20.2.6/sub-001/log/20211108-142242_3b3ca22e-c13c-42e4-bab7-ddf7d5a42849/crash-20211108-142348-sw>
Traceback (most recent call last):
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/legacymultiproc.py", line 432, in _send_procs_to_workers
self.procs[jobid].run(updatehash=updatehash)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 521, in run
result = self._run_interface(execute=True)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 639, in _run_interface
return self._run_command(execute)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 715, in _run_command
result = self._interface.run(cwd=outdir, ignore_exception=True)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 401, in run
outputs = self.aggregate_outputs(runtime)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/support.py", line 90, in __exit__
for k, v in self._resmon.stop():
ValueError: too many values to unpack (expected 2)
This error does not occur when I run the same subject on the same HPC setup with fmriprep_20.2.5.
Has something changed in how fMRIPrep and TemplateFlow interact?
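For context, the ValueError at the bottom of the traceback is Python's generic unpacking failure: the loop in nipype's resource-monitor teardown expects every item it iterates over to be a two-element pair, so any longer entry raises exactly this message. A minimal sketch of the mechanism (illustrative only, hypothetical data, not nipype's actual return value):

```python
# The pattern from the traceback: "for k, v in ..." requires every item to be a 2-tuple.
samples = [("rss_GiB", 1.2, 1636380000.0)]  # hypothetical 3-tuple, e.g. (name, value, timestamp)

try:
    for k, v in samples:  # only two targets on the left-hand side
        print(k, v)
except ValueError as err:
    print(err)  # -> too many values to unpack (expected 2)
```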
Node: fmriprep_wf.fsdir_run_20220218_175403_305f94ca_22b3_4da7_8001_1947608913a2
Working directory: /work/fmriprep_wf/fsdir_run_20220218_175403_305f94ca_22b3_4da7_8001_1947608913a2
Node inputs:
derivatives = /output
freesurfer_home = /opt/freesurfer
overwrite_fsaverage = False
spaces = ['fsnative', 'fsaverage5']
subjects_dir = /fsdir
Traceback (most recent call last):
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 344, in _send_procs_to_workers
self.procs[jobid].run(updatehash=updatehash)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 521, in run
result = self._run_interface(execute=True)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 639, in _run_interface
return self._run_command(execute)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 715, in _run_command
result = self._interface.run(cwd=outdir, ignore_exception=True)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 401, in run
outputs = self.aggregate_outputs(runtime)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/support.py", line 90, in __exit__
for k, v in self._resmon.stop():
ValueError: too many values to unpack (expected 2)
When creating this crashfile, the results file corresponding
to the node could not be found.
There is also a subject-level crash file in: output/fmriprep/sub-1000635/log/20220218-171734_b50810a9-6b74-4e4e-ac09-3a4bc872de79/crash-20220218-172025-nikhil-spacesource.aI.a0-aa1f1713-c86b-49e4-84d8-2612e3e16d90.txt
Node: fmriprep_wf.single_subject_1000635_wf.anat_preproc_wf.anat_derivatives_wf.spacesource
Working directory: /work/fmriprep_wf/single_subject_1000635_wf/anat_preproc_wf/anat_derivatives_wf/_in_tuple_MNI152NLin2009cAsym.res2/spacesource
Node inputs:
in_tuple = ('MNI152NLin2009cAsym', {'res': '2'})
Traceback (most recent call last):
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 344, in _send_procs_to_workers
self.procs[jobid].run(updatehash=updatehash)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 521, in run
result = self._run_interface(execute=True)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 639, in _run_interface
return self._run_command(execute)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 715, in _run_command
result = self._interface.run(cwd=outdir, ignore_exception=True)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 401, in run
outputs = self.aggregate_outputs(runtime)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/support.py", line 90, in __exit__
for k, v in self._resmon.stop():
ValueError: too many values to unpack (expected 2)
When creating this crashfile, the results file corresponding
to the node could not be found.
Thanks for the quick solution! I guess I didn't search the fMRIPrep issues properly.
Anyway, it does seem to work when I remove the --resource-monitor option, so it should be good for now. I do need to monitor resources in future work, though, so I will open an issue on GitHub.
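For reference, here is a minimal sketch of launching the container with --resource-monitor left out; the image name, bind paths, and output locations are placeholders rather than values from this thread, so adjust them to your own setup:

```python
# Hypothetical Singularity invocation of fMRIPrep, assembled and run from Python.
import subprocess

cmd = [
    "singularity", "run", "--cleanenv",
    "-B", "/scratch/switt4/Undergrads_fMRI/TAR:/data",  # bind the study directory (placeholder)
    "fmriprep_20.2.6.simg",                             # container image (placeholder name)
    "/data/bids", "/data/fmriprep", "participant",      # BIDS dir, output dir, analysis level
    "--participant-label", "001",
    "--output-spaces", "MNI152NLin2009cAsym:res-native",
    # "--resource-monitor",  # omitted: this flag triggers the crash described above
]
subprocess.run(cmd, check=True)
```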