Hello all,
I am trying to run fmriprep on a dataset using a script that has previously run successfully, and I am getting a crash in the FreeSurfer recon-all step. The autorecon1 node exits with the error "must specify a subject id", but I am passing the subject ID to the fmriprep job, so I'm not sure why it isn't reaching recon-all.
The issue seems similar to the one described here, but I am already using the --cleanenv flag in my singularity/apptainer call.
Any advice is very welcome.
Command used (and if a helper script was used, a link to the helper script or the command generated):
250916-13:05:39,804 nipype.workflow CRITICAL:
fMRIPrep failed: Traceback (most recent call last):
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
result["result"] = node.run(updatehash=updatehash)
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
result = self._run_interface(execute=True)
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
return self._run_command(execute)
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node autorecon1.
Cmdline:
recon-all -autorecon1 -i /tmp/fmriprep_23_2_wf/sub_DJO_wf/anat_fit_wf/anat_template_wf/denoise/mapflow/_denoise0/sub-DJO_ses-01_T1w_noise_corrected.nii.gz -noskullstrip -noT2pial -noFLAIRpial -openmp 14 -subjid sub-DJO -sd /data/output/sourcedata/freesurfer
Stdout:
ERROR: must specify a subject id
Stderr:
mktemp: failed to create file via template ‘/scratch/tmp.XXXXXXXXXX’: No such file or directory
mktemp: failed to create file via template ‘/scratch/tmp.XXXXXXXXXX’: No such file or directory
Traceback (most recent call last):
File "/opt/freesurfer/python/scripts/rca-config2csh", line 20, in <module>
configfile = sys.argv[1]
IndexError: list index out of range
Traceback:
RuntimeError: subprocess exited with code 1.
250916-13:05:41,143 cli ERROR:
Preprocessing did not finish successfully. Errors occurred while processing data from participants: DJO (1). Check the HTML reports for details.
Can you add --writable-tmpfs to the singularity run preamble? Also, -e and --cleanenv are redundant together; you only need one of them. I also don't know how reliable the $SINGULARITYENV_TMPDIR tmpdir will be. Just specify one on your local file system.
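Something like the following, roughly (the image name, bind paths, and BIDS layout here are placeholders, not your actual setup):

```shell
# Sketch of a corrected preamble, under assumed paths.
# Point TMPDIR at a directory on the local file system instead of
# relying on the container-managed tmp location.
export APPTAINERENV_TMPDIR=/path/on/local/filesystem/tmp   # placeholder path
mkdir -p "$APPTAINERENV_TMPDIR"

# --cleanenv alone (drop the redundant -e); --writable-tmpfs gives the
# container a writable overlay for scratch files like /scratch/tmp.XXXXXXXXXX.
apptainer run --cleanenv --writable-tmpfs \
    -B /path/to/bids:/data:ro \
    -B /path/to/output:/out \
    -B "$APPTAINERENV_TMPDIR":/tmp \
    fmriprep.sif \
    /data /out participant --participant-label DJO
```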
Does the error persist on newer versions of fmriprep with a fresh working directory?
Hi Steven,
I added --writable-tmpfs and removed the redundant -e from the singularity call. I also switched the /tmp binding to the local tmp directory rather than the singularity/apptainer version, but I'm getting a new issue instead:
Traceback (most recent call last):
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/interfaces/base/core.py", line 397, in run
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/niworkflows/interfaces/nibabel.py", line 151, in _run_interface
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/niworkflows/interfaces/nibabel.py", line 738, in _dilate
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/skimage/morphology/__init__.py", line 20, in <module>
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/skimage/morphology/convex_hull.py", line 6, in <module>
ImportError: /opt/conda/envs/fmriprep/lib/python3.10/site-packages/skimage/morphology/_convex_hull.cpython-310-x86_64-linux-gnu.so: cannot open shared object file: Too many open files
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/envs/fmriprep/bin/fmriprep", line 8, in <module>
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/fmriprep/cli/run.py", line 214, in main
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 879, in exec_module
File "<frozen importlib._bootstrap_external>", line 1016, in get_code
File "<frozen importlib._bootstrap_external>", line 1073, in get_data
OSError: [Errno 24] Too many open files: '/opt/conda/envs/fmriprep/lib/python3.10/site-packages/fmriprep/reports/__init__.py'
Is this related to the /tmp path?
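In case it helps with diagnosis, my understanding is that errno 24 ("Too many open files") reflects the per-process file-descriptor limit on the compute node rather than a path problem. A quick check before launching the container (just the standard shell builtin, nothing fmriprep-specific):

```shell
# Print the current soft limit on open file descriptors for this shell.
# fMRIPrep running many parallel processes can exhaust a low limit
# (a common default is 1024).
ulimit -n

# If the hard limit allows, raise the soft limit to the hard limit for
# this job before launching the container (the ceiling is site-configured).
ulimit -Sn "$(ulimit -Hn)" 2>/dev/null || true
ulimit -n
```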
Our LPC recently upgraded from Singularity to Apptainer and deprecated the singularity pathways, so I am attempting to update my stock scripts to reflect that. Previously, we have had issues with the amount of available space in the /tmp directory, which is the reason for the $SINGULARITYENV_TMPDIR binding.
I will also try invoking a newer version of fmriprep in the meantime.
Thanks,
Jess
Apptainer can still use singularity commands (the singularity command is simply an alias for apptainer) and accepts all the same argument syntax, so you shouldn't need to update your scripts much.
I don’t know where you defined $SINGULARITYENV_TMPDIR so I cannot comment on that. Were you adding SBATCH header items to increase the amount of /tmp storage? Also, rather than using /tmp, which is storage only maintained on the compute node, why not try a more dedicated network-attached scratch space? (I don’t know what that is for you, but any HPC should have one.) That way you can keep your working dir available temporarily as needed (e.g., for resuming a paused / interrupted job, inspecting intermediate files for debugging), rather than having all the files die with the node when the job ends.
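As a rough sketch of what I mean (the scratch path, resource numbers, and image name below are made up; substitute your cluster's actual scratch area and limits):

```shell
#!/bin/bash
#SBATCH --job-name=fmriprep_sub-DJO
#SBATCH --cpus-per-task=14
#SBATCH --mem=32G
#SBATCH --time=24:00:00
# Some Slurm sites also let you request minimum local /tmp space with
# --tmp; check your cluster's documentation before relying on it.
##SBATCH --tmp=50G

# Hypothetical network-attached scratch path -- replace with your site's.
WORKDIR=/scratch/$USER/fmriprep_work
mkdir -p "$WORKDIR"

# Bind the scratch area in and point fMRIPrep's working directory at it
# (-w), so intermediate files survive the node and the job can resume.
apptainer run --cleanenv \
    -B "$WORKDIR":/work \
    fmriprep.sif \
    /data /out participant --participant-label DJO \
    -w /work
```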