fMRIPrep Docker errors; unwarp_wf.resample (tasks)

I’m running a single subject on my machine (macOS Monterey, 2.3 GHz 8-core Intel Core i9, 32 GB memory) and encountering the same errors despite expanding the available Docker resources (14 CPUs, 30 GB memory, 2 GB swap, 288 GB disk image). The errors occur in the resample node (specifically: “concurrent.futures.process.BrokenProcessPool”).

Here is the call:
docker run --rm -e DOCKER_VERSION_8395080871=20.10.12 -it \
  -v /Users/rb/Desktop/freesurfer/license.txt:/opt/freesurfer/license.txt:ro \
  -v /Users/rb/Desktop/bidscheck:/data:ro \
  -v /Users/rb/Desktop/bidscheck/output:/out \
  nipreps/fmriprep:21.0.1 /data /out participant \
  --participant-label 01 -w /Users/rb/Desktop/bidscheck/workingdir \
  --omp-nthreads 8 --nthreads 12 --mem_mb 30000 --fs-no-reconall

Error example:

Resample error
Node: fmriprep_wf.single_subject_01_wf.func_preproc_ses_03_task_nback_wf.unwarp_wf.resample
Working directory: /tmp/work/fmriprep_wf/single_subject_01_wf/func_preproc_ses_03_task_nback_wf/unwarp_wf/resample

Node inputs:

in_coeff =
in_data =
in_xfms =
num_threads = 8
pe_dir =
ro_time =

Traceback (most recent call last):
File "/opt/conda/lib/python3.8/site-packages/nipype/pipeline/plugins/", line 67, in run_node
result["result"] =
File "/opt/conda/lib/python3.8/site-packages/nipype/pipeline/engine/", line 516, in run
result = self._run_interface(execute=True)
File "/opt/conda/lib/python3.8/site-packages/nipype/pipeline/engine/", line 635, in _run_interface
return self._run_command(execute)
File "/opt/conda/lib/python3.8/site-packages/nipype/pipeline/engine/", line 741, in _run_command
result =
File "/opt/conda/lib/python3.8/site-packages/nipype/interfaces/base/", line 428, in run
runtime = self._run_interface(runtime)
File "/opt/conda/lib/python3.8/site-packages/sdcflows/interfaces/", line 321, in _run_interface
) = zip(*outputs)
File "/opt/conda/lib/python3.8/concurrent/futures/", line 484, in _chain_from_iterable_of_lists
for element in iterable:
File "/opt/conda/lib/python3.8/concurrent/futures/", line 619, in result_iterator
yield fs.pop().result()
File "/opt/conda/lib/python3.8/concurrent/futures/", line 437, in result
return self.__get_result()
File "/opt/conda/lib/python3.8/concurrent/futures/", line 389, in __get_result
raise self._exception
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
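For context: BrokenProcessPool is the generic error concurrent.futures raises whenever a worker process dies without ever reporting back (for example, when the kernel's OOM killer terminates it), so the traceback itself doesn't say *why* the worker died. A minimal, self-contained reproduction (unrelated to fMRIPrep, just to show what triggers this exception):

```python
import os
from concurrent.futures import ProcessPoolExecutor
from concurrent.futures.process import BrokenProcessPool


def die_abruptly():
    # Simulate a worker killed mid-task (like the OOM killer would):
    # os._exit skips all normal teardown, so the pool never hears back.
    os._exit(1)


def demo() -> str:
    """Submit a task whose worker dies abruptly; return the exception name."""
    with ProcessPoolExecutor(max_workers=1) as pool:
        try:
            pool.submit(die_abruptly).result()
        except BrokenProcessPool as exc:
            return type(exc).__name__
    return "no error"


if __name__ == "__main__":
    print(demo())  # BrokenProcessPool
```

The exception message is exactly the one above ("A process in the process pool was terminated abruptly…"), regardless of what actually killed the worker.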

At face value it seems to be a memory issue, but that seems odd to me given the hardware and the resources allocated to Docker. At the beginning of the run I do get a message that some nodes exceed the memory (30 GB) given to the process. Am I missing something obvious (very possible… I’m completely new to this)?

Thanks in advance!

Edit: trying again now with --nthreads 12 to see if that helps. I don’t expect it to, as I still got the “Some nodes exceed” warning.

After some more digging, it seems to be an issue specifically with SDCflows. I’m not sure whether it’s a genuine out-of-memory problem or something else related to SDCflows…
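One thing worth noting: the ProcessPoolExecutor frames in the traceback suggest the resample step fans work out across num_threads worker processes (the node inputs show num_threads = 8). If each worker holds its own copy of the data, peak memory scales roughly with the pool size, so the workers together could blow past the 30 GB cap even if each one fits comfortably on its own. A back-of-the-envelope sketch (the ~3.5 GB per-worker figure is purely a made-up number for illustration, not a measurement):

```python
def peak_resample_memory_gb(per_worker_gb: float, num_workers: int) -> float:
    """Rough upper bound if every pool worker holds its own copy of the data."""
    return per_worker_gb * num_workers


# Hypothetical 3.5 GB per worker (NOT a measured figure):
print(peak_resample_memory_gb(3.5, 8))  # 28.0 -- 8 workers, close to the 30 GB cap
print(peak_resample_memory_gb(3.5, 2))  # 7.0  -- 2 workers leave plenty of headroom
```

If that's what's happening, lowering --omp-nthreads (which caps per-node threads) might help more than raising the Docker memory limit.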

Just found @oesteban’s post re: memory issues. Perhaps this is related.

Is this an open dataset we can access and try to replicate?