I am using Docker on Windows 10 with the WSL2 backend to run fMRIPrep. I first ran all non-FreeSurfer steps with fMRIPrep and everything worked fine. I then ran FreeSurfer separately and provided its outputs to fMRIPrep to complete the whole pipeline. However, the program threw errors for all subjects when each of them reached the nodes
Node Name: _parcstats0
Node Name: _parcstats1
Command used (and if a helper script was used, a link to the helper script or the command generated):
Data formatted according to a validatable standard? Please provide the output of the validator:
Relevant log outputs (up to 20 lines):
Below is the error log from the command:
Traceback (most recent call last):
File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
result["result"] = node.run(updatehash=updatehash)
File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
result = self._run_interface(execute=True)
File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node _parcstats0.
recon-all -autorecon-hemi lh -nohyporelabel -lh-only -openmp 8 -subjid sub-HC001 -sd /out/sourcedata/freesurfer -notessellate -nosmooth1 -noinflate1 -noqsphere -nofix -nowhite -nosmooth2 -noinflate2 -nocurvHK -nocurvstats -nosphere -nosurfreg -nojacobian_white -noavgcurv -nocortparc -nopial -nocortparc2 -nocortparc3 -nopctsurfcon
Traceback (most recent call last):
File "/opt/conda/lib/python3.9/site-packages/nipype/interfaces/base/core.py", line 398, in run
runtime = self._run_interface(runtime)
File "/opt/conda/lib/python3.9/site-packages/nipype/interfaces/base/core.py", line 765, in _run_interface
runtime = run_command(
File "/opt/conda/lib/python3.9/site-packages/nipype/utils/subprocess.py", line 107, in run_command
proc = Popen(
File "/opt/conda/lib/python3.9/subprocess.py", line 951, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/opt/conda/lib/python3.9/subprocess.py", line 1754, in _execute_child
self.pid = _posixsubprocess.fork_exec(
BlockingIOError: [Errno 11] Resource temporarily unavailable
I checked previous posts and I think it may be an insufficient-memory error. There was indeed a warning saying that memory might not be enough for some commands. If so, how can I roughly estimate how many CPUs my computer's memory can accommodate?
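One rough way to size this is to divide the memory you can devote to the job by an assumed per-process budget; the 8 GB-per-process figure below is my own conservative assumption for recon-all/fMRIPrep stages, not an official number, so adjust it for your data:

```shell
# Rough sizing: cap --nprocs so concurrent processes fit in RAM.
total_mem_gb=16          # RAM you can give to Docker/WSL2
gb_per_proc=8            # assumed per-process budget (not an official figure)
nprocs=$(( total_mem_gb / gb_per_proc ))
echo "suggested --nprocs: $nprocs"
```

With 16 GB available this suggests running at most 2 heavy processes at once, regardless of how many cores the machine has.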
How many CPUs and memory are you devoting to the job? Did you change from the Docker default?
I have 20 cores on my computer, but I did not change the Docker defaults or pass anything through the docker run command. I monitored my CPU and it did reach 100% usage at times. I am not sure whether I should set any other parameters to avoid this issue.
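Note that with the WSL2 backend, Docker Desktop's CPU/memory ceiling is set through %UserProfile%\.wslconfig on the Windows side, not through docker run. A minimal sketch (the values are placeholders to adjust for your machine):

```ini
[wsl2]
memory=24GB      ; cap RAM available to WSL2/Docker
processors=8     ; cap CPUs available to WSL2/Docker
swap=8GB
```

Run wsl --shutdown afterwards so the new limits take effect.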
How many subjects are you running?
There are 220 subjects in the list. But it worked fine when I used --fs-no-reconall.
It seems like it is still trying to run recon-all. How are you providing your FreeSurfer inputs to fMRIPrep?
I ran FreeSurfer 7.2.0 with recon-all -s <subj> -i <T1w.nii.gz> -all for all subjects, renamed the SUBJECTS_DIR to freesurfer, and put it in the /out directory of fMRIPrep, as described in the documentation. It did skip autorecon1 and autorecon2, and I saw in the console log that fMRIPrep correctly identified the previous FreeSurfer results. But it still seemed to be re-running some steps.
What happens when you run only a single subject at a time? Also, have you tried not supplying the FreeSurfer outputs and using a fresh working directory?
Thanks a lot for the suggestion. I re-ran fMRIPrep for only one subject in a fresh working directory, still providing the FreeSurfer results, and it worked fine! So I think it may be an insufficient-memory issue.
By default, fMRIPrep expects FreeSurfer outputs to be in sourcedata/freesurfer, which is a change from legacy behavior. You can manually set the FreeSurfer input with --fs-subjects-dir.
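For reference, an invocation sketch along these lines (the paths are placeholders for your own BIDS/output/work directories; --fs-subjects-dir is the actual flag):

```shell
fmriprep /data/bids /data/out participant \
    --participant-label HC001 \
    --fs-subjects-dir /data/out/sourcedata/freesurfer \
    -w /data/work
```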
Thanks a lot. I put it in out/sourcedata/freesurfer, and it did skip all recon-all steps, finishing in under two hours for one subject.
So now I would like to know: should I run all subjects in a for loop, or should I specify computer resources more precisely to achieve maximum performance? (I also wonder whether parallel processing is possible here, just to speed things up.)
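A simple serial loop with capped resources is one way to go; here is a dry-run sketch (remove the leading echo to actually execute). The subject list, paths, and resource values are placeholders, while --nprocs, --omp-nthreads, and --mem are real fMRIPrep options:

```shell
# Dry-run loop: prints one capped fmriprep command per subject.
subjects="HC001 HC002 HC003"
for subj in $subjects; do
    echo fmriprep /data/bids /data/out participant \
        --participant-label "$subj" \
        --fs-subjects-dir /data/out/sourcedata/freesurfer \
        --nprocs 8 --omp-nthreads 8 --mem 16000 \
        -w "/data/work/$subj"
done
```

Running subjects one at a time this way keeps peak memory bounded, at the cost of wall-clock time; launching several loops in parallel only makes sense if each one's memory budget fits alongside the others.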