fMRIPrep on HPC: CPU and memory allocations

Hey there, NeuroStars.

I’m a little confused about the relationship between the memory/CPU parameters passed to SLURM via #SBATCH flags and those passed to fMRIPrep via its own flags in the example batch script for running fMRIPrep in a Singularity container (https://fmriprep.readthedocs.io/en/stable/singularity.html).

The #SBATCH flags request 1 task, 16 CPUs, and 4G of RAM per CPU. However, the fMRIPrep call uses --omp-nthreads 8 --nthreads 12 --mem_mb 30000.
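Concretely, the relevant part looks roughly like this (paraphrasing the docs example; the paths and image name here are placeholders, not the exact ones from the docs):

```bash
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16
#SBATCH --mem-per-cpu=4G

# Placeholder paths/image name, not the exact ones from the docs
singularity run --cleanenv /path/to/fmriprep.simg \
    /path/to/bids /path/to/output participant \
    --omp-nthreads 8 --nthreads 12 --mem_mb 30000
```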

Why is --omp-nthreads less than --nthreads, and why are both values less than the number of CPUs allocated via the #SBATCH flags?

Does --mem_mb apply only to virtual memory? Why is it less than half the total memory allocated to the job?

Thanks very much!

~Will

I suspect this is an artifact of editing the example in place without testing it. Feel free to use sensible values.

--nthreads should be equal to the number of CPUs allocated. --omp-nthreads should generally not be set at all unless you have a very good reason.
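For background: --omp-nthreads caps the threads any single process may use (e.g. OpenMP-parallelized interfaces), while --nthreads caps the total across the whole run, and fMRIPrep picks a sensible per-process value on its own. A low-maintenance way to keep --nthreads in sync with the job is to read it from SLURM's environment rather than hard-coding a number, e.g. (a sketch; SLURM_CPUS_PER_TASK is set by SLURM inside jobs that request --cpus-per-task):

```bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16

# Match --nthreads to what SLURM actually granted, and leave
# --omp-nthreads unset so fMRIPrep chooses its own per-process default
singularity run --cleanenv /path/to/fmriprep.simg \
    /path/to/bids /path/to/output participant \
    --nthreads "$SLURM_CPUS_PER_TASK"
```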

--mem_mb generally applies to memory that is actually allocated (i.e., resident, not virtual), though the calculations can be significantly off depending on the cluster's overcommit policy. If you're getting crashes, setting the fMRIPrep limit lower than the SLURM allocation is a good idea. I don't have an amount or ratio to suggest at this point.
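If you do want headroom, one option is to derive the fMRIPrep limit from the SLURM grant and shave a bit off, along these lines (a sketch; the 10% margin is purely illustrative, not a recommended ratio, and SLURM_MEM_PER_CPU is expressed in MB when --mem-per-cpu is used):

```bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16
#SBATCH --mem-per-cpu=4G

# Total memory SLURM granted, in MB (SLURM_MEM_PER_CPU is set, in MB,
# when the job requests --mem-per-cpu)
TOTAL_MB=$(( SLURM_MEM_PER_CPU * SLURM_CPUS_PER_TASK ))

# Shave off ~10% as headroom; an arbitrary margin for illustration,
# since no particular ratio is being recommended here
FMRIPREP_MB=$(( TOTAL_MB * 9 / 10 ))

singularity run --cleanenv /path/to/fmriprep.simg \
    /path/to/bids /path/to/output participant \
    --nthreads "$SLURM_CPUS_PER_TASK" --mem_mb "$FMRIPREP_MB"
```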

Thanks much! I was using --omp-nthreads to leverage my HPC's parallel capabilities, based on the discussion at "How much RAM/CPUs is reasonable to run pipelines like fmriprep?".

I’ll try it without --omp-nthreads and report back.