fMRIPrep fails when processing surface-related steps

Summary of what happened:

Dear all,

I am running fMRIPrep with Docker (WSL2 backend) on Windows 10. I first ran all non-FreeSurfer steps with fMRIPrep, and everything worked fine. I then ran FreeSurfer separately and provided its outputs to fMRIPrep to complete the whole pipeline. However, the program threw errors for every subject when it reached the nodes:
Node Name: _parcstats0
Node Name: _parcstats1

Command used (and if a helper script was used, a link to the helper script or the command generated):

docker run -ti --rm                                                            \
    -v /mnt/g/BIDS:/data:ro                                                    \
    -v /mnt/g/BIDS/derivatives/fmriprep:/out                                   \
    -v /mnt/g/workdir:/work                                                    \
    -v /mnt/d/Toolbox/freesurfer7.2.0/license.txt:/opt/freesurfer/license.txt  \
    nipreps/fmriprep:latest                                                    \
    /data /out participant                                                     \
    -w /work                                                                   \
    --participant-label `cat ./docker_run_list.txt`

Version:

Docker Desktop version: 4.17.1 (101757)
fMRIPrep version: 23.0.1

Environment (Docker, Singularity, custom installation):

Windows10, Docker, WSL2

Data formatted according to a validatable standard? Please provide the output of the validator:

Yes

Relevant log outputs (up to 20 lines):

Below is the error log from the command:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
    raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node _parcstats0.

Cmdline:
	recon-all -autorecon-hemi lh -nohyporelabel -lh-only -openmp 8 -subjid sub-HC001 -sd /out/sourcedata/freesurfer -notessellate -nosmooth1 -noinflate1 -noqsphere -nofix -nowhite -nosmooth2 -noinflate2 -nocurvHK -nocurvstats -nosphere -nosurfreg -nojacobian_white -noavgcurv -nocortparc -nopial -nocortparc2 -nocortparc3 -nopctsurfcon
Stdout:

Stderr:

Traceback:
	Traceback (most recent call last):
	  File "/opt/conda/lib/python3.9/site-packages/nipype/interfaces/base/core.py", line 398, in run
	    runtime = self._run_interface(runtime)
	  File "/opt/conda/lib/python3.9/site-packages/nipype/interfaces/base/core.py", line 765, in _run_interface
	    runtime = run_command(
	  File "/opt/conda/lib/python3.9/site-packages/nipype/utils/subprocess.py", line 107, in run_command
	    proc = Popen(
	  File "/opt/conda/lib/python3.9/subprocess.py", line 951, in __init__
	    self._execute_child(args, executable, preexec_fn, close_fds,
	  File "/opt/conda/lib/python3.9/subprocess.py", line 1754, in _execute_child
	    self.pid = _posixsubprocess.fork_exec(
	BlockingIOError: [Errno 11] Resource temporarily unavailable

I checked previous posts, and I think this may be an insufficient-memory error; there was indeed a warning saying that memory might not be enough for some commands. If so, how can I roughly estimate how many CPUs my machine's memory can accommodate?
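
For example, would capping resources on both the Docker side and the fMRIPrep side be the right approach? Something like this is what I have in mind (--memory and --cpus are Docker flags; --nprocs, --omp-nthreads, and --mem-mb are fMRIPrep flags; the numbers below are only placeholders I would tune to my machine):

# Hypothetical resource caps -- numbers are placeholders, not recommendations:
#   --memory/--cpus limit the container itself;
#   --nprocs/--omp-nthreads/--mem-mb tell fMRIPrep what it may use.
docker run -ti --rm --memory=32g --cpus=16                                     \
    -v /mnt/g/BIDS:/data:ro                                                    \
    -v /mnt/g/BIDS/derivatives/fmriprep:/out                                   \
    -v /mnt/g/workdir:/work                                                    \
    -v /mnt/d/Toolbox/freesurfer7.2.0/license.txt:/opt/freesurfer/license.txt  \
    nipreps/fmriprep:latest                                                    \
    /data /out participant                                                     \
    -w /work                                                                   \
    --participant-label `cat ./docker_run_list.txt`                            \
    --nprocs 8 --omp-nthreads 4 --mem-mb 28000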

Hi @LeiGuo0812, and welcome to neurostars!

How many CPUs and how much memory are you devoting to the job? Did you change from the Docker defaults?

How many subjects are you running?

It seems like it is still trying to run recon-all. How are you providing your FreeSurfer inputs to fMRIPrep?

Best,
Steven

Dear Steven,

Thank you very much for your quick reply.

How many CPUs and how much memory are you devoting to the job? Did you change from the Docker defaults?

My computer has 20 cores, but I did not change any Docker defaults or pass anything through the docker run command. I monitored my CPU and it did reach 100% usage at times. I am not sure whether I should set any other parameters to avoid this issue.

How many subjects are you running?

There are 220 subjects in the list, but it worked fine when I used --fs-no-reconall.

It seems like it is still trying to run recon-all. How are you providing your FreeSurfer inputs to fMRIPrep?

I ran FreeSurfer 7.2.0 with recon-all -s <subj> -i <T1w.nii.gz> -all for all subjects, renamed the SUBJECTS_DIR to freesurfer, and put it in fMRIPrep's /out directory, as illustrated in the documentation. It did skip autorecon1 and autorecon2, and I saw in the console log that fMRIPrep correctly identified the previous FreeSurfer results, but it still seemed to be re-running some steps.
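
Concretely, this is roughly what I did (paths are illustrative rather than my exact ones):

# Rough sketch of my FreeSurfer run -- paths illustrative:
export SUBJECTS_DIR=/mnt/g/fs_subjects
for subj in `cat ./docker_run_list.txt`; do
    recon-all -s $subj -i /mnt/g/BIDS/$subj/anat/${subj}_T1w.nii.gz -all
done
# Then I renamed the SUBJECTS_DIR to "freesurfer" and moved it into
# the fMRIPrep output directory:
mv /mnt/g/fs_subjects /mnt/g/BIDS/derivatives/fmriprep/freesurfer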

Best,
Lei Guo

How much memory do you have available?

What happens when you run only a single subject at a time? Also, what happens if you do not supply FreeSurfer outputs and use a fresh working directory?

By default, fMRIPrep expects FreeSurfer outputs to be in sourcedata/freesurfer (relative to the output directory), which is a change from legacy behavior. You can manually set the FreeSurfer input with --fs-subjects-dir.
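
For example, with your -v /mnt/g/BIDS/derivatives/fmriprep:/out mount, fMRIPrep would look for the subjects here (a quick way to check from WSL; sub-HC001 is just an example name):

# Default location fMRIPrep checks, seen from inside the container:
#   /out/sourcedata/freesurfer/sub-HC001/
# With your mount, that corresponds to this path on the host:
ls /mnt/g/BIDS/derivatives/fmriprep/sourcedata/freesurfer

If your subjects sit elsewhere (e.g., /out/freesurfer), add --fs-subjects-dir /out/freesurfer to your docker run command instead.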

Dear Steven,

Thanks for your suggestions!

How much memory do you have available?

I have about 40 GB of free memory.

What happens when you run only single subjects at a time? And also trying not supplying freesurfer outputs and using a fresh working directory?

Thanks a lot for the suggestion. I re-ran fMRIPrep for only one subject in a fresh working directory, still providing the FreeSurfer results, and it worked fine! So I think it may be an insufficient-memory issue.

By default, fMRIPrep expects FreeSurfer outputs to be in sourcedata/freesurfer (relative to the output directory), which is a change from legacy behavior. You can manually set the FreeSurfer input with --fs-subjects-dir.

Thanks a lot. I put it in out/sourcedata/freesurfer, and it did skip all the recon-all steps, finishing in less than two hours for one subject.

Given that, should I run all subjects one at a time in a for loop (e.g., something like the sketch below), or should I specify compute resources more precisely to get maximum performance? (I also wonder whether the subjects could be processed in parallel, just to speed things up.)
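
For instance, this is the kind of loop I have in mind (one subject per run; same mounts as in my original command):

# Sketch of a serial per-subject loop -- one fMRIPrep run per subject:
for subj in `cat ./docker_run_list.txt`; do
    docker run -ti --rm                                                        \
        -v /mnt/g/BIDS:/data:ro                                                \
        -v /mnt/g/BIDS/derivatives/fmriprep:/out                               \
        -v /mnt/g/workdir:/work                                                \
        -v /mnt/d/Toolbox/freesurfer7.2.0/license.txt:/opt/freesurfer/license.txt \
        nipreps/fmriprep:latest                                                \
        /data /out participant                                                 \
        -w /work                                                               \
        --participant-label $subj
done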

I really appreciate your time and help.

Best,
Lei Guo

Great to hear you are making progress!

You can try something like Brainlife.io to run multiple subjects in parallel on cloud-based compute clusters. They have their own support forum on Slack: brainlife.slack.com

Best,
Steven

I’ll check this, thanks again for your help! :satisfied: