fMRIPrep 21.0.1 failing due to various FreeSurfer-related issues?

We are using fMRIPrep version 21.0.1, obtained from Docker Hub and run in a Singularity container (with a local TemplateFlow cache), with the following call:

proj=`cat ../PATHS.txt`
if [ ! -d "$scratch" ]; then
    mkdir -p "$scratch"
fi
module add openmind/singularity
module add openmind/freesurfer/6.0.0
subject=${subjs[${SLURM_ARRAY_TASK_ID}]}

export TEMPLATEFLOW_HOME=$templateflow

# run fmriprep for current subject

cmd="singularity exec --cleanenv -B /om3:/om3 -B /cm:/cm $proj/singularity_images/fmriprep_templateflow.sif fmriprep $bids_dir $out_dir participant --participant_label $subject --mem_mb 15000 --ignore slicetiming --use-aroma -w $scratch --fs-license-file /cm/shared/openmind/freesurfer/6.0.0/.license --output-spaces MNI152NLin6Asym:res-2"

# execute command
eval $cmd

Alternatively, we run the same command with only the --use-aroma flag removed.
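A note on the environment, in case it is relevant: since we call singularity with --cleanenv, a plain host-side export of TEMPLATEFLOW_HOME may be stripped before fmriprep starts. A sketch of how the variable could be propagated instead, using Singularity's SINGULARITYENV_ prefix (the path value below is a placeholder, not our real one):

```shell
# Host-side: variables prefixed with SINGULARITYENV_ are re-injected into the
# container environment even when --cleanenv is used.
templateflow=/tmp/demo_templateflow   # placeholder for our real templateflow dir
mkdir -p "$templateflow"
export SINGULARITYENV_TEMPLATEFLOW_HOME="$templateflow"
# The directory must also be bind-mounted (e.g. -B "$templateflow") so the
# path actually exists inside the container.
```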

We have had several issues running fMRIPrep, including extreme slowness (hence our attempt to use only a local TemplateFlow cache, to avoid network lag on our computing cluster). However, we are now seeing random failures of fMRIPrep during reconstruction due to FreeSurfer-related problems. For example:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/plugins/", line 344, in _send_procs_to_workers
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/", line 524, in run
    result = self._run_interface(execute=True)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/", line 642, in _run_interface
    return self._run_command(execute)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/", line 750, in _run_command
    raise NodeExecutionError(
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node fsdir_run_20221103_121748_1eea69f9_2af0_4d85_ac5c_60c8f1bd34b6.

Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/site-packages/nipype/interfaces/base/", line 398, in run
    runtime = self._run_interface(runtime)
  File "/opt/conda/lib/python3.9/site-packages/niworkflows/interfaces/", line 923, in _run_interface
  File "/opt/conda/lib/python3.9/", line 732, in rmtree
    _rmtree_safe_fd(fd, path, onerror)
  File "/opt/conda/lib/python3.9/", line 671, in _rmtree_safe_fd
    onerror(os.rmdir, fullname, sys.exc_info())
  File "/opt/conda/lib/python3.9/", line 669, in _rmtree_safe_fd
    os.rmdir(, dir_fd=topfd)
OSError: [Errno 39] Directory not empty: 'label'

We seem to have multiple distinct issues, specifically with the FreeSurfer reconstruction steps, which arise separately and not always in any predictable fashion. Sometimes rerunning helps; sometimes deleting all derivatives files and replacing the working_dir is necessary; sometimes it fails multiple times in a row. Sometimes, instead of the error that the label dir is not empty (meaning the fsaverage label folder under derivatives/sourcedata/freesurfer/), only one specific label file is reported missing from the fsaverage surface label files, even though all label files ARE present in fMRIPrep's copy of FreeSurfer inside the Singularity container.
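When this happens, the reset we fall back on can be sketched as follows (the paths below are placeholders for our actual derivatives and working directories): remove the half-deleted fsaverage copy and the working directory so fMRIPrep regenerates both on the next run.

```shell
# Placeholder paths for illustration; our real paths live on the cluster.
out_dir=/tmp/demo_derivatives
scratch=/tmp/demo_work
# Simulate the state left behind by a failed run.
mkdir -p "$out_dir/sourcedata/freesurfer/fsaverage/label" "$scratch/fmriprep_wf"
# Reset: drop the partially-deleted fsaverage copy and the working dir,
# then recreate an empty scratch for the rerun.
rm -rf "$out_dir/sourcedata/freesurfer/fsaverage"
rm -rf "$scratch"
mkdir -p "$scratch"
```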

Sorry I don’t have better examples; people in our lab have been testing and retesting fMRIPrep many times to try to get data pushed through, and it’s been hard to track the errors. I am just wondering if there is anything obvious about our scratch or tmp dirs, or anything like that, which could cause various FreeSurfer-related problems. I know we also load the FreeSurfer module, but all fMRIPrep calls clearly refer to the FreeSurfer version within the Singularity container.
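One thing we have not ruled out ourselves is whether concurrent array tasks ever share scratch space. A sketch of how each task could be isolated (the subject and base path below are placeholders), giving every job its own work dir and TMPDIR inside the container:

```shell
subject=sub-demo                 # placeholder; normally subjs[$SLURM_ARRAY_TASK_ID]
base_scratch=/tmp/demo_scratch   # placeholder for node-local or project scratch
scratch="$base_scratch/$subject"
mkdir -p "$scratch"
# SINGULARITYENV_ variables survive --cleanenv, so the per-subject tmp dir
# is visible inside the container as TMPDIR.
export SINGULARITYENV_TMPDIR="$scratch"
```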

There have been various errors that we were not previously experiencing with an outdated version of fMRIPrep (major version 1), so I’m not sure what we are missing now with this new version. Any help would be appreciated, and I’m fully available to run tests or answer any clarifying questions. Not even sure what the right questions are, so please help us! :slight_smile:


Just putting a response here to indicate I am following up directly with the poster (we use the same computing cluster).