Error in node unwarp_wf.resample

Summary of what happened:

Running fMRIPrep on a single subject fails with an error in the ‘unwarp_wf.resample’ node. The dataset contains 1 BOLD run, 1 T1w image, 2 fieldmap magnitude images, and 1 fieldmap phasediff image.

Running with the --ignore fieldmaps flag completes without errors, and applying fieldmap correction outside of fMRIPrep (with FSL) also works fine.

Command used (and if a helper script was used, a link to the helper script or the command generated):

Command was used with fmriprep-docker wrapper:
/opt/conda/envs/fmriprep/bin/fmriprep /data /out participant --participant_label control01 --fs-no-reconall --nthreads 8 --stop-on-first-crash --dummy-scans 3 --mem_mb 21000 -v -w /scratch

Version:

fmriprep version 23.1.3 (based on Nipype 1.8.6)

Environment (Docker, Singularity, custom installation):

Docker Desktop (v4.21.1)

Data formatted according to a validatable standard? Please provide the output of the validator:

The dataset passes BIDS validation with no errors.

Relevant log outputs (up to 20 lines):

File: /out/sub-control01/log/20230718-145946_f2d342cc-ea27-4eee-8490-4454be856bdb/crash-20230718-193542-root-resample-ef79e793-bdec-450a-8ff9-467d5ec5b0cb.txt
Working Directory: /scratch/fmriprep_23_1_wf/single_subject_control01_wf/func_preproc_task_socialrewardlearning_wf/unwarp_wf/resample
Inputs: 
in_coeff:
in_data:
in_xfms:
num_threads: 4
pe_dir:
ro_time:
Traceback (most recent call last):
  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
    raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node resample.

Traceback:
	Traceback (most recent call last):
	  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/interfaces/base/core.py", line 397, in run
	    runtime = self._run_interface(runtime)
	  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/sdcflows/interfaces/bspline.py", line 395, in _run_interface
	    ) = zip(*outputs)
	  File "/opt/conda/envs/fmriprep/lib/python3.10/concurrent/futures/process.py", line 575, in _chain_from_iterable_of_lists
	    for element in iterable:
	  File "/opt/conda/envs/fmriprep/lib/python3.10/concurrent/futures/_base.py", line 621, in result_iterator
	    yield _result_or_cancel(fs.pop())
	  File "/opt/conda/envs/fmriprep/lib/python3.10/concurrent/futures/_base.py", line 319, in _result_or_cancel
	    return fut.result(timeout)
	  File "/opt/conda/envs/fmriprep/lib/python3.10/concurrent/futures/_base.py", line 451, in result
	    return self.__get_result()
	  File "/opt/conda/envs/fmriprep/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
	    raise self._exception
	concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.

Screenshots / relevant information:

Hi @tiborst and welcome to neurostars!

How much memory does Docker have access to? This is a Docker setting, separate from --mem_mb 21000. Also, the --fs-no-reconall flag is not recommended; see here.
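
For reference, the VM memory limit on Docker Desktop is set under Settings > Resources, and you can confirm what Docker actually exposes to containers with standard Docker commands, for example:

# Total memory available to containers, in bytes
docker info --format '{{.MemTotal}}'

# Or check from inside a throwaway container
docker run --rm alpine grep MemTotal /proc/meminfo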

Best,
Steven

Thank you for the quick reply @Steven!

Docker has 26 GB of RAM and 8 CPU cores available. We are running our preprocessing locally on an M1 Max Mac Studio.

We used --fs-no-reconall in the hope of speeding up the preprocessing pipeline. Also, since we are not making use of the reconstructed T1 surfaces, I was under the impression that the reconstruction was not necessary for us.

Regarding my issue, I should also add that the EPI run is quite long (829 volumes). With a similar dataset containing 8 EPI runs per subject of around 190 volumes each, we did not run into this error.

Best regards,
Tibor

Even if you are not using the surface files (CIFTI or GIFTI) in your analysis, registration works much better in FreeSurfer-enabled pipelines (which use boundary-based registration for BOLD-to-T1w alignment).

Could you try raising the memory and/or reducing the cores? And also try a fresh working directory just to be safe?

Best,
Steven

I am currently running with 26 GB of RAM assigned to fMRIPrep (30 GB available to Docker) and a fresh working directory. I will report back as soon as it has finished or crashed.

What would be the effect of reducing the cores?

Best,
Tibor

Reducing cores will also reduce memory usage, at the expense of speed.
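
For example, a more conservative invocation of the same command could look roughly like this (the numbers are only placeholders to illustrate the flags, not a recommendation):

/opt/conda/envs/fmriprep/bin/fmriprep /data /out participant \
    --participant_label control01 --fs-no-reconall \
    --nthreads 4 --omp-nthreads 2 --mem_mb 24000 --low-mem \
    --stop-on-first-crash --dummy-scans 3 -v -w /scratch

--nthreads caps the number of concurrently running nipype processes, --omp-nthreads caps the threads within each process, and --low-mem trades extra disk usage in the working directory for lower RAM demand.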

The last run, with 26 GB of RAM available, also crashed during unwarping. Below are the last lines from the terminal:

"
230720-14:48:26,776 nipype.workflow INFO:
	 [MultiProc] Running 1 tasks, and 1 jobs ready. Free memory (GB): 6.00/26.00, Free processors: 4/8.
                     Currently running:
                       * fmriprep_23_1_wf.single_subject_control01_wf.func_preproc_task_socialrewardlearning_wf.unwarp_wf.resample
230720-16:01:25,935 nipype.workflow INFO:
	 [Node] Finished "resample", elapsed time 4835.166302s.
230720-16:01:25,952 nipype.workflow WARNING:
	 Storing result file without outputs
230720-16:01:25,984 nipype.workflow WARNING:
	 [Node] Error on "fmriprep_23_1_wf.single_subject_control01_wf.func_preproc_task_socialrewardlearning_wf.unwarp_wf.resample" (/scratch/fmriprep_23_1_wf/single_subject_control01_wf/func_preproc_task_socialrewardlearning_wf/unwarp_wf/resample)
230720-16:01:27,443 nipype.workflow ERROR:
	 Node resample failed to run on host 14d0cb4fd141.
230720-16:01:27,503 nipype.workflow ERROR:
	 Saving crash info to /out/sub-control01/log/20230720-110459_3210c700-2800-4c8c-a352-7ba02b659386/crash-20230720-160127-root-resample-07eee0c3-47dc-4dc2-9b62-ba0f30efbdff.txt
Traceback (most recent call last):
  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
    raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node resample.

Traceback:
	Traceback (most recent call last):
	  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/interfaces/base/core.py", line 397, in run
	    runtime = self._run_interface(runtime)
	  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/sdcflows/interfaces/bspline.py", line 395, in _run_interface
	    ) = zip(*outputs)
	  File "/opt/conda/envs/fmriprep/lib/python3.10/concurrent/futures/process.py", line 575, in _chain_from_iterable_of_lists
	    for element in iterable:
	  File "/opt/conda/envs/fmriprep/lib/python3.10/concurrent/futures/_base.py", line 621, in result_iterator
	    yield _result_or_cancel(fs.pop())
	  File "/opt/conda/envs/fmriprep/lib/python3.10/concurrent/futures/_base.py", line 319, in _result_or_cancel
	    return fut.result(timeout)
	  File "/opt/conda/envs/fmriprep/lib/python3.10/concurrent/futures/_base.py", line 451, in result
	    return self.__get_result()
	  File "/opt/conda/envs/fmriprep/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
	    raise self._exception
	concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.


230720-16:01:27,698 nipype.workflow CRITICAL:
	 fMRIPrep failed: Traceback (most recent call last):
  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
    raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node resample.

Traceback:
	Traceback (most recent call last):
	  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/interfaces/base/core.py", line 397, in run
	    runtime = self._run_interface(runtime)
	  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/sdcflows/interfaces/bspline.py", line 395, in _run_interface
	    ) = zip(*outputs)
	  File "/opt/conda/envs/fmriprep/lib/python3.10/concurrent/futures/process.py", line 575, in _chain_from_iterable_of_lists
	    for element in iterable:
	  File "/opt/conda/envs/fmriprep/lib/python3.10/concurrent/futures/_base.py", line 621, in result_iterator
	    yield _result_or_cancel(fs.pop())
	  File "/opt/conda/envs/fmriprep/lib/python3.10/concurrent/futures/_base.py", line 319, in _result_or_cancel
	    return fut.result(timeout)
	  File "/opt/conda/envs/fmriprep/lib/python3.10/concurrent/futures/_base.py", line 451, in result
	    return self.__get_result()
	  File "/opt/conda/envs/fmriprep/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
	    raise self._exception
	concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.


230720-16:01:29,979 cli ERROR:
	 Preprocessing did not finish successfully. Errors occurred while processing data from participants: control01 (1). Check the HTML reports for details.
fMRIPrep: Please report errors to https://github.com/nipreps/fmriprep/issues
"

I will try to re-run it with fewer cores and see if that helps.

With this dataset we mistakenly acquired two fewer slices for the fieldmaps than for the EPIs, so two slices at the inferior end are ‘missing’ from the fieldmaps. Could this be causing the problem? However, I should note that a manual fieldmap distortion correction in FSL worked just fine.
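
For what it's worth, the coverage mismatch is easy to confirm from the image headers, e.g. with FSL (the file names below are placeholders for our BIDS layout):

fslinfo /data/sub-control01/fmap/sub-control01_phasediff.nii.gz
fslinfo /data/sub-control01/func/sub-control01_task-socialrewardlearning_bold.nii.gz

In our case dim3 (the number of slices) is 2 smaller for the fieldmap images.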

Dear @Steven,

I have tried running fMRIPrep with a maximum of 4 cores to reduce the RAM demand. This runs into the same error, which leads me to believe that it is not a problem of computing power or resources.

What about the issue with unequal brain coverage between fieldmaps and EPIs? Could this be causing the error?

Best,
Tibor

Hi @tiborst.

Did you clear the work directory (or use a different work directory) before retrying?

Yes, I am always using a fresh folder as the working directory.

Best,
Tibor

A quick update:
I was able to fix my problem by reverting to an older version of fMRIPrep (v20.2.7), which I believe uses a different method for susceptibility distortion correction (I got the idea from another thread: Fieldmap correction gone wrong - #13 by jsein). This fixed the issue entirely, so I will go ahead and mark this as the solution.
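
In case it helps anyone running into the same issue, this is roughly how the older version can be pinned when bypassing the fmriprep-docker wrapper (the bind-mount paths are placeholders, and I am assuming the 20.2.7 tag is available under nipreps/fmriprep; the FreeSurfer license has to be mounted manually, here into the container's FreeSurfer home):

docker pull nipreps/fmriprep:20.2.7

docker run --rm -it \
    -v /path/to/bids:/data:ro \
    -v /path/to/derivatives:/out \
    -v /path/to/workdir:/scratch \
    -v /path/to/fs_license.txt:/opt/freesurfer/license.txt:ro \
    nipreps/fmriprep:20.2.7 \
    /data /out participant --participant_label control01 \
    --fs-no-reconall --nthreads 8 --stop-on-first-crash \
    --dummy-scans 3 --mem_mb 21000 -v -w /scratch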

Thank you very much for your help @Steven!