Memory issues running fMRIPrep on the Midnight Scan Club dataset

Update: unfortunately, it crashed on half of the subjects, while the rest ran successfully. The log file I get for MSC01 can be found here: https://gist.github.com/danjgale/8fde5959bcd81e7d4727ec97e7e1edf3

This looks to be a mirror of this post. Picking up where that post left off, I tried running it again with --mem-mb 100000, with no success. Thoughts?

mghRead(/out/freesurfer/sub-MSC01/mri/T2.prenorm.mgz): could not read 262144 bytes at slice 172

Errors similar to this come up on the FreeSurfer mailing list, and most of the advice I could find boils down to “get more RAM and make sure you have enough disk space,” with no real resolution beyond that (that I saw). The FreeSurfer site says 4 GB of RAM for the server and 4 GB per subject, but that has been their recommendation since at least May 2013. MSC is only 10 subjects, but with files that are larger than was common back then, so their recommendation might be out of date.

Sentry is reporting that, for the MSC01 recon-all, there was 261 MB of free memory when the node started execution.

I’d hope 100 GB would have been enough to run this. If you rerun this on a single subject and it works, that would definitely rule out any file integrity issues.
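
As a quick sanity check on the file itself, FreeSurfer’s mri_info should be able to read the volume end to end, and a truncated or corrupt file tends to fail the same way as in your log (the path below is just the one from your log; adjust it to the host-side output folder if you run this outside the container):

# Try to read the full volume; a truncated file will error out
mri_info /out/freesurfer/sub-MSC01/mri/T2.prenorm.mgz

# Comparing sizes against subjects that finished can also flag truncation
ls -lh /out/freesurfer/sub-*/mri/T2.prenorm.mgz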

What’s the total memory limit of the server that you are running this on?

The use of --mem-mb is probably confusing. It is just a way of making fMRIPrep aware of memory limits it should observe. It does not mean that it will play nicely every time and always stay below the indicated amount.

That means:

  • fMRIPrep should run just fine without --mem-mb as long as you have allocated enough RAM for it to run. Please make sure that your container has access to enough RAM.
  • increasing the value of --mem-mb will make the problem worse. Instead, pass in a smaller value (I usually run fMRIPrep with --mem-mb 30000, even though I allocate the node’s memory in full, which is 64 GB). A quick way to check what your container can actually see is sketched below.
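
On Linux you can verify what Docker has available with something like this (a sketch; busybox is just an example image):

# Total memory visible to the Docker daemon, in bytes
docker info --format '{{.MemTotal}}'

# What a container itself sees
docker run --rm busybox free -m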

@rwblair I just finished running one subject and it still raised an error. The total memory limit of the server I am on is 512 GB.

@oesteban When I ran the single subject, I set --mem-mb 30000. How would I go about ensuring that the container has enough RAM on a Linux system?

Is that exactly the same error?

The --mem-mb 30000 flag tells Nipype not to parallelize so many tasks that they would use more than 30 GB of RAM (based on very rough estimates, BTW).
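
If you also want to bound parallelism explicitly, something like the following should work; I am assuming here that --nthreads and --omp-nthreads are the right flag spellings for your version, so double-check fmriprep-docker --help first:

fmriprep-docker /Raid6/raw/midnight_scan_club /Raid6/users/dan/Documents/Projects/midnight/data/ participant --fs-license-file ~/Documents/licenses/freesurfer/license.txt -w /Raid6/users/dan/Documents/Projects/midnight/fmriprep_working_dir/ --participant-label MSC01 --mem-mb 30000 --nthreads 8 --omp-nthreads 8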

512 GB is a lot. It does not make sense that fMRIPrep is hitting that limit at all. What is the exact command line you are trying?

Okay, that makes more sense. Thanks for the clarification.

The command I am running is:

fmriprep-docker /Raid6/raw/midnight_scan_club /Raid6/users/dan/Documents/Projects/midnight/data/ participant --fs-license-file ~/Documents/licenses/freesurfer/license.txt -w /Raid6/users/dan/Documents/Projects/midnight/fmriprep_working_dir/ --participant-label MSC01 --mem-mb 30000

Okay, so you are using Docker. Have you double-checked the memory limitation settings? On Windows and macOS they can be pretty low by default.

Finally, it is not clear to me that the error from FreeSurfer necessarily derives from memory limits. Could you be hitting some kind of disk quota, particularly on the output folder?
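
Both are easy to rule out from a shell on Linux; a sketch, assuming the output folder from your command above and that the quota tools are installed:

# Free space on the filesystem holding the outputs
df -h /Raid6/users/dan/Documents/Projects/midnight/data/

# Per-user quotas, if any are configured
quota -s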

This is on an Ubuntu 14.04 system, so there should be no memory limitations (i.e., when I type docker info I get WARNING: No swap limit support). There are no disk quotas set for my user or the output folder, and there’s still plenty of space left on the machine. I’m running this subject again with a different output folder just to see if that does anything. It’s odd, and I’ll keep plugging away at it.

I realize that this is a duplicate of “Could not read error : while file ":/out/freesurfer/sub-001/mri/T2.prenorm.mgz" exist”.

Do you mind if I close this one and we keep the conversation there? Please post there whether any of the ideas in that thread helped you with this instance of the problem.
