Memory usage of fmriprep

Dear all

I’m using fMRIPrep on a cloud system with a script that runs subjects in parallel.
I’ve observed that some subjects’ analyses do not finish when I allocate 32GB of RAM per subject.
However, when allocating 64GB per subject, all subjects process smoothly.
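For reference, a minimal sketch of the kind of per-subject launch I mean (the paths, subject labels, and concurrency below are illustrative placeholders, not my exact script, and the memory/CPU flag names may differ across fMRIPrep versions):

```python
# Sketch: launch one fMRIPrep run per subject, a few at a time,
# each with a fixed per-subject memory budget.
import subprocess
from concurrent.futures import ThreadPoolExecutor

BIDS_DIR = "/data/bids"                 # placeholder paths
OUT_DIR = "/data/derivatives"
WORK_DIR = "/scratch/fmriprep_work"
SUBJECTS = ["01", "02", "03", "04"]     # placeholder participant labels
MEM_MB = 32000                          # per-subject memory budget (32 GB)

def run_subject(sub):
    cmd = [
        "fmriprep", BIDS_DIR, OUT_DIR, "participant",
        "--participant-label", sub,
        "--mem-mb", str(MEM_MB),        # memory fMRIPrep may use for this run
        "--nprocs", "8",                # CPUs per subject
        "-w", f"{WORK_DIR}/sub-{sub}",  # per-subject working directory
    ]
    return subprocess.run(cmd, check=False).returncode

# Run up to two subjects concurrently on one node.
with ThreadPoolExecutor(max_workers=2) as pool:
    codes = list(pool.map(run_subject, SUBJECTS))
print(dict(zip(SUBJECTS, codes)))
```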

Is there any way to predict fMRIPrep’s memory usage for each subject?

Thank you very much for your help.

Best, Michael

It’s a tricky thing to predict since it depends on many factors. However, your numbers are unusually high; I have never had to assign more than 30GB per subject. Are you using the latest version? Is it high spatial and/or temporal resolution data (functional and/or anatomical)?

This thread seems related: How much RAM/CPUs is reasonable to run pipelines like fmriprep?


As @ChrisGorgolewski says, it’s difficult to predict. We try to estimate memory usage within the program to reduce the chance of hitting memory limits unnecessarily.

While I can’t tell you how much memory you will use, I can attempt to address your situation. The most common cause I’ve seen for running out of memory is a large BOLD series (>700 or so TRs). If this is your situation and you aren’t constrained for disk space in your working directory, consider using the --low-mem flag (even though it seems you can request plenty of memory). This will wait until the end of the pipeline to compress the resampled BOLD series, which allows tasks that need to read these files to read only the necessary parts of the file into memory.
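If you want to check whether a particular run falls into that regime, you can count the volumes and estimate the uncompressed in-memory size from the NIfTI header. A minimal sketch with nibabel (the file path is just an example):

```python
# Rough check of whether a BOLD run is "large": count volumes (TRs) and
# estimate its uncompressed size in memory.
import numpy as np
import nibabel as nib

img = nib.load("sub-01/func/sub-01_task-rest_bold.nii.gz")  # example path
n_trs = img.shape[3] if len(img.shape) == 4 else 1
uncompressed_gb = np.prod(img.shape) * img.get_data_dtype().itemsize / 1024**3

print(f"volumes (TRs): {n_trs}")
print(f"uncompressed size: {uncompressed_gb:.2f} GB")
# Several hundred TRs or multiple GB per run suggests --low-mem may help.
```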


Many thanks! I will try this.

Best, Mike