Hi all,
I'm having some trouble running fmriprep 20.1.1 on an HPC cluster using Singularity (3.5.2): it crashes after ~25 min on autorecon1. I'm currently processing only one subject for testing purposes.
Crash message:
Node: fmriprep_wf.single_subject_aam063_wf.anat_preproc_wf.surface_recon_wf.autorecon1
Working directory: /working_dir/fmriprep_wf/single_subject_aam063_wf/anat_preproc_wf/surface_recon_wf/autorecon1
Node inputs:
FLAIR_file = <undefined>
T1_files = <undefined>
T2_file = <undefined>
args = <undefined>
big_ventricles = <undefined>
brainstem = <undefined>
directive = autorecon1
environ = {}
expert = <undefined>
flags = <undefined>
hemi = <undefined>
hippocampal_subfields_T1 = <undefined>
hippocampal_subfields_T2 = <undefined>
hires = <undefined>
mprage = <undefined>
mri_aparc2aseg = <undefined>
mri_ca_label = <undefined>
mri_ca_normalize = <undefined>
mri_ca_register = <undefined>
mri_edit_wm_with_aseg = <undefined>
mri_em_register = <undefined>
mri_fill = <undefined>
mri_mask = <undefined>
mri_normalize = <undefined>
mri_pretess = <undefined>
mri_remove_neck = <undefined>
mri_segment = <undefined>
mri_segstats = <undefined>
mri_tessellate = <undefined>
mri_watershed = <undefined>
mris_anatomical_stats = <undefined>
mris_ca_label = <undefined>
mris_fix_topology = <undefined>
mris_inflate = <undefined>
mris_make_surfaces = <undefined>
mris_register = <undefined>
mris_smooth = <undefined>
mris_sphere = <undefined>
mris_surf2vol = <undefined>
mrisp_paint = <undefined>
openmp = 8
parallel = <undefined>
steps = <undefined>
subject_id = recon_all
subjects_dir = <undefined>
talairach = <undefined>
use_FLAIR = <undefined>
use_T2 = <undefined>
xopts = <undefined>
Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 516, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 635, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 741, in _run_command
    result = self._interface.run(cwd=outdir)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 397, in run
    runtime = self._run_interface(runtime)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 792, in _run_interface
    self.raise_exception(runtime)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 723, in raise_exception
    ).format(**runtime.dictcopy())
RuntimeError: Command:
recon-all -autorecon1 -i /data/sub-aam063/anat/sub-aam063_T1w.nii -noskullstrip -openmp 8 -subjid sub-aam063 -sd /output/freesurfer
Standard output:
Standard error:
/home/fmriprep/fmriprep_wf/single_subject_aam063_wf/anat_preproc_wf/surface_recon_wf/autorecon1: No such file or directory.
Return code: 1
The additional slurm output file, generated with fmriprep's -vv flag:
slurm-381709_0.txt (239.8 KB)
And the #SBATCH header from my .sh submission script:
#!/bin/bash
#
#SBATCH --array 0
#SBATCH --partition=test
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=80
#SBATCH --mem-per-cpu=1024 #1GB
#SBATCH --time=02:00:00
#SBATCH --no-requeue
#SBATCH --mail-type=ALL
# ------------------------------------------
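For completeness, the general shape of the Singularity invocation inside the script is below; the image name, host paths, and bind targets are placeholders, not the actual ones on our cluster:

```shell
# Sketch only -- host paths, bind targets, and image name are placeholders.
singularity run --cleanenv \
    -B /host/bids:/data \
    -B /host/derivatives:/output \
    -B /host/work:/working_dir \
    fmriprep-20.1.1.simg \
    /data /output participant \
    --participant-label aam063 \
    -w /working_dir -vv
```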
I have no problems when running fmriprep on a different server; there I usually use the mem and threads flags as well …
Any ideas what might be causing this? Maybe @oesteban or @effigies?
Thank you very much in advance,
Best,
Dominik