fMRIPrep stopped due to a memory error

Hi,

I ran fmriprep on a participant and got a “memory error” before it could finish. Checking the output folder, many outputs were missing.

Any ideas on how I can get around the memory error?

Error message:

Let’s see if we can’t get fmriprep working. I believe the memory error is showing up because the machine fmriprep is running on ran out of memory. So the solution will be to either run fmriprep on a machine with more memory available or reduce the amount of memory fmriprep is using.

Could you please give some information about how you’re running fmriprep, specifically:

  1. How did you install fmriprep? (Docker, Singularity, or locally)
  2. What operating system are you using? (Windows, macOS, or Linux)
  3. How much RAM is available on your machine?

The answers to those questions will help me find a solution for you.

Thanks!
James

Hi James,

Thank you for your help!

I ran fmriprep using Docker, and the operating system is Linux. I checked with IT, and the VM has 32 GB of RAM available.

Thanks!

Thanks for the information; that rules out a few common issues. I can offer a suggestion, but it will be helpful to get a couple more bits of information.

Suggestion: use the --low-mem option in your call to fmriprep. This may reduce the memory usage enough (although 32 GB should be more than enough RAM).

Requested Info

  1. What is the exact command you typed in the terminal for this run of fmriprep?
  2. For sub-ADPRC0001, what is the size of the func folder (in the BIDS directory, not the fmriprep directory)? Assuming you are in the top level of the BIDS directory, the command would be: du -sh ./sub-ADPRC0001/func

This could be it: https://github.com/poldracklab/fmriprep/issues/1254

Hi James,

Sorry for the delay.

  1. The exact command I used to run fmriprep was:

sudo docker run -it --rm \
-v /data/USERS/KELAN/sourcedata/:/data -v /data/USERS/KELAN/fmriprep/:/out \
-v /data/USERS/KELAN/sourcedata/license.txt:/opt/freesurfer/license.txt \
poldracklab/fmriprep:latest /data /out/out participant \
--participant-label sub-ADPRC0001 --fs-license-file /opt/freesurfer/license.txt

  2. The size of the func folder for sub-ADPRC0001 is 465M.

Thank you for your help!

Cheers
Kelan

Hi Kelan,

That is a sizable functional folder, so I will stick with my previous recommendation: try the same command with --low-mem specified (full command below) and see if that fixes the problem.
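
Concretely, that would be your command from before with --low-mem appended:

sudo docker run -it --rm \
-v /data/USERS/KELAN/sourcedata/:/data -v /data/USERS/KELAN/fmriprep/:/out \
-v /data/USERS/KELAN/sourcedata/license.txt:/opt/freesurfer/license.txt \
poldracklab/fmriprep:latest /data /out/out participant \
--participant-label sub-ADPRC0001 --fs-license-file /opt/freesurfer/license.txt \
--low-mem

(--low-mem reduces RAM pressure by keeping large BOLD intermediates on disk rather than in memory, at the cost of extra disk usage in the working directory.)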

At the same time, I’ll follow up with @ChrisGorgolewski’s suggestion, and see if that’s the bug that’s causing the issue.

Alright, to try Chris’s solution, here are the steps:

  1. Clone the branch of fmriprep that contains the fix (arbitrarily assuming you are in the /data directory):
mkdir projects && cd ./projects
git clone -b resource_spec https://github.com/jdkent/fmriprep.git
  2. Re-run the docker command with fmriprep patched:
sudo docker run -it --rm \
-v /data/projects/fmriprep/fmriprep:/usr/local/miniconda/lib/python3.6/site-packages/fmriprep:ro \
-v /data/USERS/KELAN/sourcedata/:/data -v /data/USERS/KELAN/fmriprep/:/out \
-v /data/USERS/KELAN/sourcedata/license.txt:/opt/freesurfer/license.txt \
poldracklab/fmriprep:latest /data /out/out participant --participant-label sub-ADPRC0001 \
--fs-license-file /opt/freesurfer/license.txt
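
If the patched package is mounted incorrectly (for example, if the clone landed in a different directory), that will usually show up as an immediate import error. A quick smoke test, reusing the same bind mount, is to ask the container which version of fmriprep it imports:

sudo docker run --rm \
-v /data/projects/fmriprep/fmriprep:/usr/local/miniconda/lib/python3.6/site-packages/fmriprep:ro \
--entrypoint python poldracklab/fmriprep:latest \
-c 'import fmriprep; print(fmriprep.__version__)'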

If you have any questions about this, please ask.

Best,
James

P.S. Also make sure (if you haven’t already) that you are running the latest fmriprep version by typing: sudo docker pull poldracklab/fmriprep:latest

Hi @metoyou1226, I have just merged @jdkent’s patch into fMRIPrep. Would you mind testing it for us? I’ll walk you through the process if you don’t mind giving it a go.

Thanks!

Hi @oesteban, yes I would love to give it a run. Please let me know the details.

Thanks!

Hi James,

Thank you very much for your help! I will give it a try.

Regards
Kelan

Hi @metoyou1226,

Since I merged the patch, you can follow @jdkent’s instructions but using the official fmriprep repo:

mkdir projects && cd ./projects
git clone https://github.com/poldracklab/fmriprep.git
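
If you still have the fork cloned from before, an alternative (a sketch, assuming the checkout lives at /data/projects/fmriprep) is to point the existing clone at the merged upstream code instead of re-cloning:

cd /data/projects/fmriprep
git remote add upstream https://github.com/poldracklab/fmriprep.git
git fetch upstream
git checkout upstream/master

Either way, the docker command itself is unchanged, since the bind mount path stays the same.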

Hi @jdkent and @oesteban,

Thank you so much for your help!

The command ran without any issues for the selected participant; it took about 14 hours to complete. Would you say that is a reasonable amount of time?

Cheers
Kelan

If you are not using the --fs-no-reconall option (https://fmriprep.readthedocs.io/en/latest/usage.html#Surface%20preprocessing%20options) then yes, 14h seems likely.

You can save time in your re-runs of fmriprep by caching the freesurfer outputs or skipping freesurfer (option --fs-no-reconall): https://fmriprep.readthedocs.io/en/latest/workflows.html#surface-preprocessing
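
For example, your original command with surface processing skipped would look like this (keep in mind that --fs-no-reconall also means you will not get surface-based outputs):

sudo docker run -it --rm \
-v /data/USERS/KELAN/sourcedata/:/data -v /data/USERS/KELAN/fmriprep/:/out \
-v /data/USERS/KELAN/sourcedata/license.txt:/opt/freesurfer/license.txt \
poldracklab/fmriprep:latest /data /out/out participant \
--participant-label sub-ADPRC0001 --fs-license-file /opt/freesurfer/license.txt \
--fs-no-reconall

The caching works automatically: if you keep the same output directory, fMRIPrep should find the existing freesurfer folder under /out/out and reuse it instead of re-running recon-all.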

Hi James,

Sorry to bother you again. It seems like I’m running into memory errors again, but I’m also getting other error messages. Not sure if the two are related?

The command I ran was:
sudo docker run -it --rm \
-v /data/USERS/KELAN/sourcedata/projects/fmriprep/fmriprep:/usr/local/miniconda/lib/python3.6/site-packages/fmriprep:ro \
-v /data/USERS/KELAN/sourcedata/:/data -v /data/USERS/KELAN/fmriprep/:/out \
-v /data/USERS/KELAN/sourcedata/license.txt:/opt/freesurfer/license.txt \
poldracklab/fmriprep:latest /data /out/out participant \
--participant-label sub-ADPRC0013 --fs-license-file /opt/freesurfer/license.txt \
--nthreads 2 --omp-nthreads 4 --mem-mb 16000 --low-mem

I tried limiting the memory usage after getting memory errors, but the error still comes up.

Below are the error messages/warnings I got:

180913-22:25:24,53 nipype.workflow INFO:
[Node] Finished "fmriprep_wf.single_subject_ADPRC0013_wf.func_preproc_task_rest_run_01_wf.bold_bold_trans_wf.bold_reference_wf.enhance_and_skullstrip_bold_wf.apply_mask".
/usr/local/miniconda/lib/python3.6/site-packages/nitime/utils.py:980: FutureWarning: Conversion of the second argument of issubdtype from complex to np.complexfloating is deprecated. In future, it will be treated as np.complex128 == np.dtype(complex).type.
complex_result = (np.issubdtype(in1.dtype, np.complex) or
/usr/local/miniconda/lib/python3.6/site-packages/nitime/utils.py:981: FutureWarning: Conversion of the second argument of issubdtype from complex to np.complexfloating is deprecated. In future, it will be treated as np.complex128 == np.dtype(complex).type.
np.issubdtype(in2.dtype, np.complex))
/usr/local/miniconda/lib/python3.6/site-packages/scipy/fftpack/basic.py:160: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use arr[tuple(seq)] instead of arr[seq]. In the future this will be interpreted as an array index, arr[np.array(seq)], which will result either in an error or a different result.
z[index] = x

And this one:

[Node] Error on "fmriprep_wf.single_subject_ADPRC0013_wf.func_preproc_task_rest_run_01_wf.bold_confounds_wf.signals"
Traceback (most recent call last):
File "/usr/local/miniconda/bin/fmriprep", line 11, in <module>
sys.exit(main())
File "/usr/local/miniconda/lib/python3.6/site-packages/fmriprep/cli/run.py", line 342, in main
fmriprep_wf.run(**plugin_settings)
File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/engine/workflows.py", line 595, in ru
runner.run(execgraph, updatehash=updatehash, config=self.config)
File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/plugins/base.py", line 162, in run
self._clean_queue(jobid, graph, result=result))
File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/plugins/base.py", line 224, in _clean
raise RuntimeError("".join(result['traceback']))
RuntimeError: Traceback (most recent call last):
File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/plugins/multiproc.py", line 69, in ru
result['result'] = node.run(updatehash=updatehash)
File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 480, in run
result = self._run_interface(execute=True)
File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 564, in _run_i
return self._run_command(execute)
File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 644, in _run_c
result = self._interface.run(cwd=outdir)
File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/interfaces/base/core.py", line 521, in run
runtime = self._run_interface(runtime)
File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/interfaces/nilearn.py", line 97, in _run_inter
signals.append(masker.fit_transform(self.inputs.in_file))
File "/usr/local/miniconda/lib/python3.6/site-packages/nilearn/input_data/nifti_maps_masker.py", line 213, i
return self.fit().transform(imgs, confounds=confounds)
File "/usr/local/miniconda/lib/python3.6/site-packages/nilearn/input_data/base_masker.py", line 176, in tran
return self.transform_single_imgs(imgs, confounds)
File "/usr/local/miniconda/lib/python3.6/site-packages/nilearn/input_data/nifti_maps_masker.py", line 319, i
verbose=self.verbose)
File "/usr/local/miniconda/lib/python3.6/site-packages/sklearn/externals/joblib/memory.py", line 362, in __c
return self.func(*args, **kwargs)
File "/usr/local/miniconda/lib/python3.6/site-packages/nilearn/input_data/base_masker.py", line 98, in filte
memory_level=memory_level)(imgs)
File "/usr/local/miniconda/lib/python3.6/site-packages/sklearn/externals/joblib/memory.py", line 362, in __c
return self.func(*args, **kwargs)
File "/usr/local/miniconda/lib/python3.6/site-packages/nilearn/input_data/nifti_maps_masker.py", line 29, in
mask_img=self.resampled_mask_img)
File "/usr/local/miniconda/lib/python3.6/site-packages/nilearn/regions/signal_extraction.py", line 270, in i
data[maps_mask, :])[0].T
File "/usr/local/miniconda/lib/python3.6/site-packages/scipy/linalg/basic.py", line 1250, in lstsq
resids = np.sum(np.abs(x[n:])**2, axis=0)
MemoryError

Regards
Kelan

Maybe you should have a look at your Docker settings. By default there is an 8 GB limit, if I’m not wrong. Try raising that.
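
A quick way to check what the engine reports is:

sudo docker info | grep -i memory

As far as I know, a native Linux engine has no cap unless one was configured; the default limit applies to Docker for Mac/Windows, where you raise it in the Docker preferences.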

Hi everyone, I collaborate with Kelan and am currently trying to help with this issue. I know that earlier Kelan said we have 32 GB of RAM, but after checking again it seems we only have 16 GB.

Is this something we should look to upgrade?

Thanks for all your help on this.

Cheers,
Reece

I’d give --nthreads 1 --omp-nthreads <number of cpus> a shot in that case. Please also check the memory available to the Docker engine.
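
A sketch of what that could look like, assuming a 4-CPU VM (adjust --omp-nthreads to your actual core count, and keep --mem-mb below the physical 16 GB):

sudo docker run -it --rm \
-v /data/USERS/KELAN/sourcedata/:/data -v /data/USERS/KELAN/fmriprep/:/out \
-v /data/USERS/KELAN/sourcedata/license.txt:/opt/freesurfer/license.txt \
poldracklab/fmriprep:latest /data /out/out participant \
--participant-label sub-ADPRC0013 --fs-license-file /opt/freesurfer/license.txt \
--nthreads 1 --omp-nthreads 4 --mem-mb 12000 --low-mem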

Sorry for taking so long to respond, did you try @oesteban’s suggestion?

Hi everyone – thanks so much for your help. We have got it running (still going; large sample).

We ended up removing the --nthreads, --omp-nthreads, and --mem-mb flags.

Once we got it running, we convinced IT to increase the available RAM, so now it’s going a bit quicker :)

Cheers,
Reece