TypeError & MemoryError with fmriprep-docker

Hello everybody!
I got a TypeError and a MemoryError when using fmriprep-docker (21.0.1) to preprocess my data, although I have run fMRIPrep on the same data before without errors. I'm on a machine with 64 GB RAM, 16×2 cores, and Ubuntu 18.04.

The error I got:

Node: fmriprep_wf.single_subject_01_wf.func_preproc_ses_01_task_rest_run_01_wf.bold_std_trans_wf.bold_reference_wf.get_dummy
Working directory: /scratch/fmriprep_wf/single_subject_01_wf/func_preproc_ses_01_task_rest_run_01_wf/bold_std_trans_wf/bold_reference_wf/_std_target_MNI152NLin2009cAsym.res1/get_dummy

Node inputs:

in_file =
n_volumes = 40
nonnegative = True
zero_dummy_masked = 20

Traceback (most recent call last):
File "/opt/conda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 428, in run
runtime = self._run_interface(runtime)
File "/opt/conda/lib/python3.8/site-packages/niworkflows/interfaces/bold.py", line 84, in _run_interface
data = img.get_fdata(dtype="float32")[..., :self.inputs.n_volumes]
File "/opt/conda/lib/python3.8/site-packages/nibabel/dataobj_images.py", line 355, in get_fdata
data = np.asanyarray(self.dataobj, dtype=dtype)
File "/opt/conda/lib/python3.8/site-packages/numpy/core/asarray.py", line 171, in asanyarray
return array(a, dtype, copy=False, order=order, subok=True)
File "/opt/conda/lib/python3.8/site-packages/nibabel/arrayproxy.py", line 391, in __array__
arr = self._get_scaled(dtype=dtype, slicer=())
File "/opt/conda/lib/python3.8/site-packages/nibabel/arrayproxy.py", line 358, in _get_scaled
scaled = apply_read_scaling(self._get_unscaled(slicer=slicer), scl_slope, scl_inter)
File "/opt/conda/lib/python3.8/site-packages/nibabel/volumeutils.py", line 959, in apply_read_scaling
arr = arr * slope
numpy.core._exceptions._ArrayMemoryError: Unable to allocate 30.5 GiB for an array with shape (193, 229, 193, 960) and data type float32

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/opt/conda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
result["result"] = node.run(updatehash=updatehash)
File "/opt/conda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 516, in run
result = self._run_interface(execute=True)
File "/opt/conda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 635, in _run_interface
return self._run_command(execute)
File "/opt/conda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 741, in _run_command
result = self._interface.run(cwd=outdir)
File "/opt/conda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 445, in run
runtime.traceback_args = ("\n".join(["%s" % arg for arg in exc_args]),)
File "/opt/conda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 445, in <listcomp>
runtime.traceback_args = ("\n".join(["%s" % arg for arg in exc_args]),)
TypeError: not all arguments converted during string formatting
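
Note that the TypeError in the second traceback is raised inside nipype's own error-reporting code while it formats the original exception, so the real failure is the 30.5 GiB allocation in the first traceback. That number checks out for the res-1 grid; a quick sanity check in bash, with the shape and dtype taken from the error above:

# 193 x 229 x 193 voxels x 960 volumes x 4 bytes (float32), in GiB:
echo "scale=1; 193*229*193*960*4 / 1024^3" | bc
# prints 30.5, matching the ArrayMemoryError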

The command I used:

#!/bin/bash
#User inputs:
bids_root_dir=$HOME/my/raw2_tutorial
subj=01
nthreads=4
mem=20 #gb

#Begin:
mem=`echo "${mem//[!0-9]/}"` #remove gb at end
mem_mb=`echo $(((mem*1000)-5000))` #reduce some memory for buffer space during pre-processing

export TEMPLATEFLOW_HOME=$HOME/.cache/templateflow
export FS_LICENSE=$HOME/my/raw2_tutorial/derivatives/license.txt

#Run fmriprep
fmriprep-docker $bids_root_dir $bids_root_dir/derivatives \
participant \
--participant-label $subj \
--skip-bids-validation \
--md-only-boilerplate \
--fs-license-file $HOME/my/raw2_tutorial/derivatives/license.txt \
--fs-no-reconall \
--output-spaces MNI152NLin2009cAsym:res-1 \
--nthreads $nthreads \
--stop-on-first-crash \
--mem_mb $mem_mb \
-w $HOME
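
To make the memory arithmetic concrete, with mem=20 the script ends up passing 15000 to --mem_mb:

mem=20
mem_mb=$(((mem*1000)-5000))
echo $mem_mb   # prints 15000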

Thanks a lot!

Hi GretchenMa, are you able to offer any updates on this? I am running into a very similar error in my Docker / fMRIPrep pipeline, although it occasionally tries to allocate even larger arrays, upwards of 100 GiB.

Sorry, I still don't know why.
But I found that if I change '--output-spaces MNI152NLin2009cAsym:res-1' to '--output-spaces MNI152NLin2009cAsym:res-2', it works. 'res-native' also works.
I then tried data from OpenNeuro, and it works with 'res-1', 'res-2', and 'res-native'.
I also freed some storage and killed some other processes, and I haven't hit this error since.
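
That fits the numbers, for what it's worth. Assuming the res-2 grid of MNI152NLin2009cAsym is 97×115×97 (the 2 mm TemplateFlow dimensions; an assumption, not taken from the traceback), the same 960-volume float32 array is roughly 8× smaller:

# 97 x 115 x 97 voxels (assumed 2 mm grid) x 960 volumes x 4 bytes, in GiB:
echo "scale=1; 97*115*97*960*4 / 1024^3" | bc
# prints 3.8, versus 30.5 for the res-1 grid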

Here is the command I am using now. Hope it helps.

#!/bin/bash
#User inputs:
bids_root_dir=/home/test_tutorial
subj=01
nthreads=8
mem=32 #gb
omp_nthreads=8

#Begin:
#Convert virtual memory from gb to mb
mem=`echo "${mem//[!0-9]/}"` 
mem_mb=`echo $(((mem*1000)-5000))` 
export TEMPLATEFLOW_HOME=/home/.local/lib/python3.8/site-packages/templateflow
export FS_LICENSE=/home/test_tutorial/derivatives/license.txt

fmriprep-docker $bids_root_dir $bids_root_dir/derivatives \
participant \
--participant-label $subj \
--skip-bids-validation \
--md-only-boilerplate \
--fs-license-file /home/test_tutorial/derivatives/license.txt \
--fs-no-reconall \
--output-spaces MNI152NLin2009cAsym:res-native \
--nthreads $nthreads \
--omp-nthreads $omp_nthreads \
--stop-on-first-crash \
--mem_mb $mem_mb \
-w $HOME

Thanks, this was actually helpful. I think requesting multiple output resolutions at once was what caused the problem for me. Using one resolution at a time, along with the --low-mem flag, seems to have solved it for me.
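
For anyone who finds this thread later, here is a minimal sketch of that adjusted call; the paths and subject label are placeholders, not taken from this thread:

fmriprep-docker /data/bids /data/bids/derivatives \
participant \
--participant-label 01 \
--output-spaces MNI152NLin2009cAsym:res-2 \
--low-mem \
--mem_mb 15000 \
-w /data/work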