fMRIPrep: expected memory allocation

Summary of what happened:

I’m preprocessing fMRI data using fMRIPrep. I have two tasks with the following acquisition parameters:
Task 1:

  • Runs: 8
  • Duration of scan: 188 volumes
  • TR: 1.87s
  • Voxel resolution: 1.7 mm isotropic (122 x 122 x 63 voxels)

Task 2:

  • Runs: 1
  • Duration of scan: 353 volumes
  • TR: 1.87s
  • Voxel resolution: 1.7 mm isotropic (122 x 122 x 63 voxels)

I am running fMRIPrep on a high-performance computing cluster, allocating 40 GB of memory and 8 threads. If I lower the memory allocation below 35 GB, the job is terminated with an out-of-memory error.

With 40 GB, the job runs all the way through for most participants, but I always get this warning:
240822-10:36:56,931 nipype.workflow WARNING: Some nodes exceed the total amount of memory available (40.96GB).
However, the job fails for a handful of participants with a variety of errors, and the cluster admin suggested I look into this warning.

Is it expected that this job would require 40 GB of memory? Is there a way to decrease the size of the job?
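For reference, my back-of-envelope estimate of the raw BOLD data size (assuming uncompressed float32 voxels; fMRIPrep's in-memory working copies and derivatives will of course be larger):

122 x 122 x 63 voxels x 188 volumes x 4 bytes ≈ 0.71 GB per Task 1 run (≈ 5.6 GB across all 8 runs)
122 x 122 x 63 voxels x 353 volumes x 4 bytes ≈ 1.32 GB for the Task 2 run

So the raw functional data alone is well under the 40 GB I am allocating.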

Command used (and if a helper script was used, a link to the helper script or the command generated):

Relevant portions of the script running fMRIPrep:

export OMP_NUM_THREADS=8
echo "****OMP_NUM_THREADS is $OMP_NUM_THREADS****"

sid=sub-${SLURM_ARRAY_TASK_ID}
echo $sid

# SLURM_MEM_PER_NODE and SLURM_CPUS_PER_TASK are set by SLURM from the
# #SBATCH directives; SLURM_MEM_PER_NODE is reported in MB, which is what --mem-mb expects
fmriprep \
    "$input_dir" \
    "$output_dir" \
    participant \
    --work-dir "$working_dir" \
    --fs-license-file "$freesurfer_dir"/license.txt \
    --participant-label "$sid" \
    --dummy-scans 0 \
    --output-spaces anat \
    --use-syn-sdc \
    --ignore fieldmaps \
    --force-syn \
    --mem-mb $SLURM_MEM_PER_NODE \
    --n-cpus $SLURM_CPUS_PER_TASK

SLURM submission script:

#!/bin/bash
#SBATCH --job-name=RunfMRIPrep
#SBATCH --mem=40g
#SBATCH --account=def-rolsen
#SBATCH --time=4-0:00:00
#SBATCH --cpus-per-task=8
#SBATCH --array=1014,1015,1017

srun ImagingScripts/Runfmriprep.sh

Version:

fMRIPrep version 23.0.2

Environment (Docker, Singularity / Apptainer, custom installation):

Preprocessing is run on a high-performance computing cluster by loading the apptainer and fmriprep modules.

Data formatted according to a validatable standard? Please provide the output of the validator:

bids-validator@1.8.0

	1: [WARN] The recommended file /README is missing. See Section 03 (Modality agnostic files) of the BIDS specification. (code: 101 - README_FILE_MISSING)

	Please visit https://neurostars.org/search?q=README_FILE_MISSING for existing conversations about this issue.


        Summary:                  Available Tasks:        Available Modalities: 
        677 Files, 54.83GB        AIM                     MRI                   
        31 - Subjects             local                                         
        1 - Session                                                             

Hi @mrinmayik,

You can use the --low-mem flag, reduce the number of threads, process runs piecewise using BIDS filter files, or some combination thereof.
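For example, here is a sketch of one piecewise call. The filter-file name and the runs-1-2 split are placeholders I made up, and "AIM" is the task name from your validator output, so adjust the entities to match your data:

# Hypothetical filter file selecting only runs 1-2 of task AIM
cat > "$working_dir"/filter_runs_1-2.json <<'EOF'
{
    "bold": {
        "datatype": "func",
        "suffix": "bold",
        "task": "AIM",
        "run": [1, 2]
    }
}
EOF

# Scoped fmriprep call: --low-mem trades memory for disk I/O in the work
# directory, and fewer threads limit how many runs process in parallel.
# (Keep your remaining flags, e.g. --dummy-scans, --output-spaces, and the
# SDC options, as they were.)
fmriprep \
    "$input_dir" \
    "$output_dir" \
    participant \
    --work-dir "$working_dir" \
    --fs-license-file "$freesurfer_dir"/license.txt \
    --participant-label "$sid" \
    --bids-filter-file "$working_dir"/filter_runs_1-2.json \
    --low-mem \
    --mem-mb $SLURM_MEM_PER_NODE \
    --n-cpus 4

The top-level "bold" key scopes the filter to fMRIPrep's BOLD query, and the entity filters are passed through to PyBIDS. If you keep the same --work-dir across the piecewise calls, already-completed steps such as the anatomical workflow should be reused from the cache rather than recomputed.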

Best,
Steven