fMRIPrep taking much longer than expected on local workstation

I’m running fMRIPrep for the first time and haven’t managed to get a full output yet; I’d really appreciate any thoughts on these issues. I may have fundamentally misunderstood something!

Here are the two commands I have tried; I moved to the second after finding a possible workaround for the amd64 issue on my M1 Mac:

fmriprep-docker /Users/z/…/T2/ABM_test /Users/z/…/T2/testprep --fs-license-file $HOME/license.txt

docker run --platform linux/amd64 --rm -e DOCKER_VERSION_8395080871=20.10.17 -it -v /Users/z/license.txt:/opt/freesurfer/license.txt:ro -v /Users/z/…/T2/ABM_test:/data:ro -v /Users/z/…/T2/testprep:/out nipreps/fmriprep:22.0.2 /data /out participant

Here is the folder structure, which passes a BIDS validation and is intentionally minimal with one run of one task (although it may be less than the minimum files needed for fMRIPrep to run):
├── dataset_description.json
├── participants.json
├── participants.tsv
└── sub-01
    ├── anat
    │   ├── sub-01_T1w.json
    │   └── sub-01_T1w.nii.gz
    └── func
        ├── sub-01_task-abm_bold.json
        ├── sub-01_task-abm_bold.nii.gz
        └── sub-01_task-abm_events.tsv

The final message I get before it hangs indefinitely (I let it run for nearly 24 hours before ending it):

[MultiProc] Running 1 tasks, and 0 jobs ready. Free memory (GB): 1.99/6.99, Free processors: 1/6.
Currently running:

Is it possible that the machine I’m using just isn’t powerful enough? I get a warning saying “Some nodes exceed the total amount of memory available (8.75GB).”, which I took to mean the run would take longer than usual, but perhaps it means it’s simply not possible. I have 16GB of RAM and the full dataset is 2.17GB. I may also have left out some files that fMRIPrep expects, which could be causing it to halt.

Very new to this so hopefully I have included the necessary things in this post, but please do let me know if not. Thanks in advance!

Hi Z,

Your dataset looks fine for fMRIPrep, at least at a surface level (and since it passes validation, I would presume it is perfectly fine).

A few questions / suggestions to start:

  1. I would specify an explicit scratch/working directory with the -w flag.
  2. I see you have 16 GB of RAM on your machine. How much of that are you devoting to Docker (e.g., in the Docker Desktop settings or with the -m argument to docker run)? There are similar ways to change CPU limits as well.
  3. The recon-all nodes are expected to take a while (~8-12 hours total, depending on resources and the resolution of your T1), so if resources are limited to your system default, it is not surprising that this is where it appears to hang. If you are curious how it runs without recon-all, you can do a debug run with the --fs-no-reconall flag, or run recon-all yourself in FreeSurfer and have fMRIPrep import those results. I’ll add that --fs-no-reconall should only be used for debugging purposes, as FreeSurfer improves the quality of the anatomical derivatives.
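Putting suggestions 1 and 2 together, an invocation along these lines might help; the paths here are placeholders for your own, not your actual ones:

```shell
# Sketch only: /path/to/... are placeholder paths, adjust to your system.
# -w gives fMRIPrep an explicit scratch/working directory; intermediate
# results persist there between runs, which also makes it easier to see
# where a run stalls and to resume without redoing finished steps.
fmriprep-docker /path/to/bids /path/to/derivatives participant \
    --fs-license-file "$HOME/license.txt" \
    -w /path/to/workdir
```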


Thanks for your quick reply!

On point 2 - I had 10 GB of RAM and 6 CPUs with 1.5 GB of swap allocated to Docker. It sounds like more RAM would be beneficial, at minimum?

I will set a scratch directory and try a debugging run without recon-all to see how that runs, many thanks for the suggestions.



Yes, more RAM may be beneficial. You could also try passing the --low-mem option.
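For example, building on the plain docker invocation from earlier in the thread (paths are placeholders), it could look something like this:

```shell
# Sketch with placeholder paths; adjust to your system.
# -m caps the container's memory on the Docker side, while --mem-mb tells
# the fMRIPrep workflow how much it may plan to use; --low-mem trades
# disk space in the working directory for a lower RAM footprint.
docker run --platform linux/amd64 --rm -it \
    -m 12g \
    -v /path/to/license.txt:/opt/freesurfer/license.txt:ro \
    -v /path/to/bids:/data:ro \
    -v /path/to/derivatives:/out \
    -v /path/to/workdir:/work \
    nipreps/fmriprep:22.0.2 /data /out participant \
    -w /work --low-mem --mem-mb 10000
```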

Thanks, I’ll give that a try! Even with 10GB of RAM it still hung at the same point for over 14 hours. It’s probably worth learning how to use our HPC cluster with Singularity instead of Docker - I didn’t anticipate that my machine would struggle this much, but I’ll see how --low-mem helps it out.
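For future reference, my understanding is that an HPC run with Singularity might look something like this (placeholder paths, and I haven’t tested it on our cluster):

```shell
# Build the container image once from the same Docker tag used above.
singularity build fmriprep-22.0.2.simg docker://nipreps/fmriprep:22.0.2

# Run without needing a Docker daemon; -B binds host paths into the
# container, --cleanenv avoids leaking host environment variables in.
singularity run --cleanenv \
    -B /path/to/bids:/data:ro \
    -B /path/to/derivatives:/out \
    -B /path/to/workdir:/work \
    -B /path/to/license.txt:/opt/freesurfer/license.txt:ro \
    fmriprep-22.0.2.simg /data /out participant -w /work
```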

This combination of --low-mem and more RAM worked until it errored out for other reasons - thank you both!


Glad to hear it! I can also recommend which is a free service that allows you to run preprocessing pipelines on cloud-based servers, taking the load off of your personal machine.
