I am having issues running fMRIPrep on my new Mac (M2 Pro, 16 GB RAM, ~2 TB of storage available).
I have also enabled the "Use Rosetta for x86/amd64 emulation on Apple Silicon" option in Docker, as kindly recommended in another post of mine.
Further, I learned that where the data is read from and written to matters, so in the current setup the raw data are on the external hard drive, while the derivatives and working directories are on the internal drive (reading from and writing to external drives apparently slows down processing and causes other issues in some circumstances).
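For context, my setup boils down to a command of roughly this shape (a sketch only; the paths and version tag are placeholders, not my exact call):

# Sketch only: paths and version tag are placeholders.
# --platform forces the x86_64 image, which the Rosetta option then emulates.
docker run --rm -it \
  --platform linux/amd64 \
  -v /Volumes/ExternalHD/bids:/data:ro \
  -v /Users/ilaria/derivatives:/out \
  -v /Users/ilaria/work:/work \
  nipreps/fmriprep:23.2.0 \
  /data /out participant -w /work

The mounts mirror the data layout described above: raw BIDS data read-only from the external drive, derivatives and working directory on the internal drive.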
The issue is that fMRIPrep appears to create all the output files except the HTML quality report, and there are no error files in the output. Can anyone help me solve this? It seems all the other files have been created.
Thank you,
Ilaria
Relevant log outputs (up to 20 lines):
The only message I get at the end is:
240306-19:57:20,989 nipype.workflow INFO:
     [Node] Finished "conf_plot", elapsed time 6.742654s.
exception calling callback for <Future at 0x7fffea254250 state=finished raised BrokenProcessPool>
Traceback (most recent call last):
  File "/opt/conda/envs/fmriprep/lib/python3.10/concurrent/futures/_base.py", line 342, in _invoke_callbacks
    callback(self)
  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/pipeline/plugins/multiproc.py", line 159, in _async_callback
    result = args.result()
  File "/opt/conda/envs/fmriprep/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/opt/conda/envs/fmriprep/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
Usually I see this error when running out of memory. Does the error persist when running on only a subset of the data (e.g., by using a --bids-filter-file)? Without seeing a tree listing of the fMRIPrep output directory, we cannot say whether all files were created successfully or not. Also, unrelated, but --fs-no-reconall is not recommended, as it disables boundary-based registration and other surface-based methods, which tend to outperform volumetric registration methods.
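For example, a minimal filter file that restricts fMRIPrep to the BOLD runs of a single task could look like the following (the task name "rest" is illustrative; queries you do not list fall back to the defaults):

# Write a BIDS filter file; adapt the entities to your dataset.
cat > /path/to/filter.json <<'EOF'
{
    "bold": {
        "datatype": "func",
        "suffix": "bold",
        "task": "rest"
    }
}
EOF

You would then add --bids-filter-file /path/to/filter.json (together with --participant-label to select a single subject) to your fMRIPrep call.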
Can you increase the amount of memory you are providing to the Docker container? This can be done in Docker Desktop under Settings > Resources > Advanced.
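On macOS, containers run inside a Linux VM, so this setting is a hard ceiling on what fMRIPrep can use, regardless of the Mac's physical RAM. You can confirm what the VM actually received with:

docker info | grep -i "total memory"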
Thank you for following up on this.
The old Mac mini was a dual-core Intel Core i7 with 16 GB of RAM.
Regarding Docker, I have increased the amount of memory, though I have not maxed it out. How much memory can I provide without completely freezing the computer? I don't want to end up assigning too much, if that is a problem as well.
I do not know much about optimizing for the Mac ARM chips in these analyses. Without knowing how much data you have per subject, it is hard to give a reasonable memory or time estimate, but I typically try to give at least 16 GB, and 10+ hours is not unreasonable for me when starting from scratch.
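One more thing worth trying: cap fMRIPrep's own resource usage so its worker processes stay inside the container's limit, which is the usual fix for BrokenProcessPool crashes. A sketch, with example values you would tune to your machine (double-check the flag names against fmriprep --help for your version):

# Example resource caps; keep --mem-mb a few GB below the Docker limit.
fmriprep /data /out participant \
  -w /work \
  --nprocs 4 \
  --omp-nthreads 4 \
  --mem-mb 12000 \
  --low-mem

--low-mem trades extra disk usage in the working directory for a smaller memory footprint.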
If using --bids-filter-file to process the data piecewise is too tedious, I would recommend trying brainlife.io, which enables cloud-based parallelization of apps such as fMRIPrep.