fMRIPrep failing during anatomical/FreeSurfer processing, possibly related to simultaneous processes accessing TemplateFlow?

Hi all,

Our group is running fMRIPrep across all of our studies (many scans) via Singularity on an HPC system. When we run more than one session for a participant (this is a longitudinal study), preprocessing fails during the anatomical/FreeSurfer portion, and the failure appears to be related to accessing TemplateFlow.

Here is the error we see…

    File "/opt/conda/envs/fmriprep/lib/python3.10/pathlib.py", line 818, in relative_to
        raise ValueError("{!r} is not in the subpath of {!r}"
    ValueError: '/home1/06950/jalmeida/.cache/templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-01_desc-brain_mask.nii.gz' is not in the subpath of '/opt/templateflow' OR one path is relative and the other is absolute.
    When creating this crashfile, the results file corresponding
    to the node could not be found.
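For context, this ValueError comes from `pathlib.Path.relative_to`, which raises whenever the cached file does not live under the directory it is being made relative to. A minimal sketch reproducing it outside fMRIPrep (paths copied from the traceback above):

```shell
# Reproduce the crash's ValueError outside fMRIPrep: relative_to() raises
# when the cached template file is not under /opt/templateflow.
python3 - <<'EOF'
from pathlib import Path

cached = Path("/home1/06950/jalmeida/.cache/templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-01_desc-brain_mask.nii.gz")
try:
    cached.relative_to("/opt/templateflow")
except ValueError as err:
    print(err)  # same message as in the crashfile
EOF
```

Notably, the cached path starts with `~/.cache/templateflow`, which is TemplateFlow's default cache location when `TEMPLATEFLOW_HOME` is unset, so part of the run seems not to be seeing the variable at all.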

Here are some guesses about why this might be happening:

  1. We are using separate BIDS filter files for session 1 and session 2, so that each anatomical scan is processed separately rather than averaged.

For example, here is the command that runs for session 1:

    /opt/conda/envs/fmriprep/bin/fmriprep \
        /work/06950/jalmeida/ls6/project_nndcbipolar/ \
        /work/06950/jalmeida/ls6/project_nndcbipolar/derivatives/fmriprep-v23.1.3/ \
        participant --participant-label sub-ut0006 \
        -w /scratch/06950/jalmeida/work/project_nndcbipolar/ \
        --fs-license-file /work/06950/jalmeida/ls6/apps/fmriprep/license.txt \
        -v --skip_bids_validation \
        --bids-filter-file /work/06950/jalmeida/ls6/apps/fmriprep/ses-01_bf.json \
        --stop-on-first-crash --mem-mb 190000 --nthreads 64

Here is the BIDS filter file for session 01 (the session 2 filter file is identical except that "session" is "02"):

    {
        "t1w": {
            "datatype": "anat",
            "session": "01",
            "suffix": "T1w"
        },
        "t2w": {
            "datatype": "anat",
            "session": "01",
            "suffix": "T2w"
        },
        "bold": {
            "datatype": "func",
            "session": "01",
            "suffix": "bold",
            "direction": "AP"
        }
    }
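As a quick sanity check (to rule out a malformed filter file rather than the crash itself), the filter JSON can be validated before submitting jobs; the path below is the ses-01 file from the command above:

```shell
# Confirm the BIDS filter file parses as JSON before submitting any jobs.
python3 -m json.tool /work/06950/jalmeida/ls6/apps/fmriprep/ses-01_bf.json > /dev/null \
    && echo "ses-01_bf.json is valid JSON"
```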
  2. We have placed the TemplateFlow directory in our scratch space, where everyone in our group has access, so we don't run into permission errors if someone restarts a subject that was previously run and failed by someone else. I'm not sure I set this up correctly. Is it possible that different processes accessing TemplateFlow simultaneously could cause this issue?
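One thing worth verifying is whether `TEMPLATEFLOW_HOME` actually reaches the container: `singularity run -e` cleans the host environment, and depending on the Singularity/Apptainer version the passthrough prefix may need to be `SINGULARITYENV_` rather than `APPTAINERENV_`. A hedged check, reusing the image and bind paths from our script:

```shell
# If this prints nothing (or errors), the variable is not reaching the
# container, and TemplateFlow will fall back to ~/.cache/templateflow --
# which matches the path in the crashfile.
export APPTAINERENV_TEMPLATEFLOW_HOME=/opt/templateflow
export SINGULARITYENV_TEMPLATEFLOW_HOME=/opt/templateflow  # older Singularity versions use this prefix
singularity exec -e \
    --bind /scratch/study/opt/templateflow:/opt/templateflow \
    /work/06950/jalmeida/ls6/apps/fmriprep/fmriprep_23.1.3.sif \
    printenv TEMPLATEFLOW_HOME
```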

Here is the fMRIPrep portion of the script that we submit through sbatch:

    export OMP_NUM_THREADS=64
    export APPTAINERENV_TEMPLATEFLOW_HOME=/opt/templateflow  # Tell fMRIPrep the mount point

    unset PYTHONPATH; singularity run -e \
        --bind ${finalworkdir}/ \
        --bind /scratch/study/opt/templateflow:/opt/templateflow \
        --bind /scratch/work/project_${studyname}/ \
        /work/06950/jalmeida/ls6/apps/fmriprep/fmriprep_23.1.3.sif \
        ${finalworkdir} ${finalworkdir}derivatives/fmriprep-v23.1.3/ participant \
        --participant-label sub-${participantid} \
        -w /scratch/06950/jalmeida/work/project_${studyname}/ \
        --fs-license-file /work/06950/jalmeida/ls6/apps/fmriprep/license.txt \
        -v \
        --skip_bids_validation \
        --bids-filter-file /work/ls6/apps/fmriprep/ses-${sessionid}_bf.json \
        --stop-on-first-crash \
        --mem-mb 190000 --nthreads ${OMP_NUM_THREADS}
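If simultaneous jobs really are racing to populate the shared TemplateFlow cache, one common workaround is to pre-fetch the templates once, serially, before submitting any jobs, so every job finds a complete cache. A sketch using the `templateflow` Python API inside the same container (template name taken from the error message above; run once, not per job):

```shell
# Warm the shared TemplateFlow cache once before any sbatch submissions.
# TEMPLATEFLOW_HOME must point at the shared mount while this runs.
export APPTAINERENV_TEMPLATEFLOW_HOME=/opt/templateflow
singularity exec -e \
    --bind /scratch/study/opt/templateflow:/opt/templateflow \
    /work/06950/jalmeida/ls6/apps/fmriprep/fmriprep_23.1.3.sif \
    python -c "from templateflow import api; api.get('MNI152NLin2009cAsym')"
```

With the cache fully populated up front, the per-subject jobs should only ever read from `/opt/templateflow`, never write to it concurrently.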