BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending

Summary of what happened:

Hello:

I am having issues running fMRIPrep on my new Mac with an M2 Pro chip (16GB RAM, ~2TB available).
I have also enabled the "Use Rosetta for x86/amd64 emulation on Apple Silicon" option in Docker, as kindly recommended in another post of mine.

Further, I learned that where the data is written matters, so in the current setup the raw data are on the external hard drive, while the derivatives and working folders are on the computer's internal drive (reading from and writing to external hard drives apparently slows down processing and can cause other issues in some circumstances).

The issue is that fMRIPrep appears to create all the files except the HTML quality report, and there are no output files reporting errors.

Can anyone help me solve this error? It seems all files have been created.
Thank you,
Ilaria

Command used (and if a helper script was used, a link to the helper script or the command generated):

fmriprep-docker /Volumes/NENS01/BIDS/raw \
/Users/nens.lab/Desktop/TempProcessingFolder/derivatives \
participant \
--participant-label 004 \
--fs-license-file /Volumes/NENS01/fsl_license/license.txt \
--skip-bids-validation \
--fs-no-reconall \
--low-mem \
--dummy-scans 1 \
--output-spaces MNI152NLin2009cAsym:res-1 \
-w /Users/nens.lab/Desktop/TempProcessingFolder/workingFolder

Version:

Environment (Docker, Singularity / Apptainer, custom installation):

Docker

Data formatted according to a validatable standard? Please provide the output of the validator:

PASTE VALIDATOR OUTPUT HERE

Relevant log outputs (up to 20 lines):

The only message I get at the end is:

240306-19:57:20,989 nipype.workflow INFO:
	 [Node] Finished "conf_plot", elapsed time 6.742654s.
exception calling callback for <Future at 0x7fffea254250 state=finished raised BrokenProcessPool>
Traceback (most recent call last):
  File "/opt/conda/envs/fmriprep/lib/python3.10/concurrent/futures/_base.py", line 342, in _invoke_callbacks
    callback(self)
  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/pipeline/plugins/multiproc.py", line 159, in _async_callback
    result = args.result()
  File "/opt/conda/envs/fmriprep/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/opt/conda/envs/fmriprep/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.

Screenshots / relevant information:


Hi @ilaria,

Usually I see this error due to running out of memory. Does this error persist when running on only a subset of the data (e.g., by using a --bids-filter-file, as in the sketch below)? Without seeing the tree listing of the fMRIPrep output directory, we cannot say whether all files were created successfully or not. Also, not related, but --fs-no-reconall is not recommended, as it disables boundary-based registration and other surface-based methods, which tend to outperform volumetric registration methods.
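
For example, a filter file is just a small JSON file passed with --bids-filter-file. This is only a sketch: the "rest" task and "01" run labels below are placeholders, so substitute whatever entities your dataset actually uses:

# filter.json -- hypothetical example restricting fMRIPrep to a single BOLD run;
# "rest" and "01" are made-up labels, replace them with your own task/run entities
cat > filter.json <<'EOF'
{
    "bold": {
        "datatype": "func",
        "suffix": "bold",
        "task": "rest",
        "run": "01"
    }
}
EOF

# then append to the existing fmriprep-docker command:
#   --bids-filter-file /path/to/filter.json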

Best,
Steven

Hi Steven:

This same dataset was preprocessed successfully on a less powerful Mac Pro, and as far as I can tell everything is there.

I will get the information from the old computer to see how different the two are…
Will be back with that information

ilaria

Can you increase the amount of memory you are providing to the Docker container? This can be done in the options of Docker Desktop under Resources → Advanced:
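
If you want to double-check what the change actually did, a quick way is to ask the Docker daemon how much memory it exposes to containers (it should roughly match what you set under Resources → Advanced):

# Report the memory available to containers, in bytes
docker info --format 'Total memory: {{.MemTotal}} bytes'

# or just grep the human-readable summary
docker info | grep -i 'total memory'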

Hi @stebo85 :

Thank you for following up on this.
The old Mac mini was a 16GB dual-core Intel Core i7.

Regarding Docker, I have increased the amount of memory, though I have not maxed it out. How much memory can I provide without completely freezing the computer? I don’t want to end up assigning too much if that is also a problem.

Thank you,
Ilaria

Just checked: I have allocated 13GB out of 16GB. Can I safely go higher?
thanks

@Steven

Sorry, I tagged the wrong person in my previous answer to your comment about running out of memory.

My old computer was a Mac mini (late 2014), 3GHz dual-core Intel Core i7, 16GB 1600MHz DDR3.
It ran all my fMRIPrep analyses accurately (slowly, but well).

My new computer is a Mac mini with an Apple M2 Pro (10-core CPU, 16-core GPU, 16-core Neural Engine), 16GB unified memory, and 2TB SSD storage.

I believe the new one is more powerful than the old one, yet I’m having issues.

I have assigned 13GB out of 16GB in Docker. Is this the problem?

Do you have any insight into how I can make my newer computer work? I’d really like these analyses not to take 10+ hours.

Do you rule out an installation error?

Your help is greatly appreciated, as this is impacting my student’s ability to progress in her dissertation.

Thank you,
Ilaria

Hi @ilaria,

I do not know much about optimizing for the Mac ARM chips for these analyses. Without knowing how much data you have per subject, it is hard to say what a reasonable memory usage or time estimate is, but I typically try to give at least 16GB, and 10+ hours is not unreasonable for me when starting from scratch.
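
If memory is the bottleneck, one thing you could try is capping fMRIPrep's own resource usage with its standard --mem-mb, --nprocs, and --omp-nthreads options so it stays below whatever you allocate to Docker. This is only a sketch of your earlier command; the numbers are illustrative, not tuned for your data:

# Illustrative only: cap fMRIPrep at ~12GB and 4 worker processes so it stays
# below a 13GB Docker allocation; adjust the numbers to your machine.
fmriprep-docker /Volumes/NENS01/BIDS/raw \
    /Users/nens.lab/Desktop/TempProcessingFolder/derivatives \
    participant \
    --participant-label 004 \
    --fs-license-file /Volumes/NENS01/fsl_license/license.txt \
    --output-spaces MNI152NLin2009cAsym:res-1 \
    --mem-mb 12000 \
    --nprocs 4 \
    --omp-nthreads 2 \
    -w /Users/nens.lab/Desktop/TempProcessingFolder/workingFolder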

If the use of --bids-filter-file to process the data piecewise is too tedious, I would recommend trying brainlife.io, which enables cloud-based parallelization of apps such as fMRIPrep.

Best,
Steven

Thanks, the time I am reporting is without FreeSurfer! It should only take a few hours, based on similar data processed on another, non-M2 computer.

I will look into brainlife.io, but I’d really like to have a functioning system in my lab.

Thanks,
ilaria