SDC error using fmriprep v22.1.1 and v22.1.0

Summary of what happened:

For a couple of datasets, SDC does not seem to work with fMRIPrep v22.1.0/v22.1.1. For most participants with the same data structure the pipeline runs fine (~180 datasets), but for a few I get the same error about the bs_filter node. My fieldmap consists of 2 magnitude files and 1 phase-difference image. While running fMRIPrep v22.1.0 we noticed a strange distortion of the image, so we BET-extracted the magnitude and phase-difference images with FSL before running fMRIPrep. The datasets in question did not produce any error messages with v20.2.3.
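
For reference, the brain extraction was done with FSL's BET; the sketch below shows roughly what that step looks like when scripted through nipype's FSL interface. The file name and fractional-intensity threshold are illustrative, not our exact values.

```python
# Illustrative sketch: skull-strip a fieldmap magnitude image with FSL BET via nipype,
# before handing the data to fMRIPrep. Requires FSL to be installed and on the PATH.
from nipype.interfaces import fsl

bet = fsl.BET(
    in_file="sub-064_magnitude1.nii.gz",         # hypothetical file name
    out_file="sub-064_magnitude1_brain.nii.gz",
    frac=0.5,                                    # fractional intensity threshold; tune per dataset
    mask=True,                                   # also write the binary brain mask
)
bet.run()
```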

Command used (and if a helper script was used, a link to the helper script or the command generated):


fmriprep_sub.py /project/******/MRI/bids_v3.7.4 -o /project/3011099.01/MRI/fmriprep/fmriprep_v22.1.1_LD -m 84000 -p 064 -a "--skip_bids_validation -t LexicalDecision --use-aroma --aroma-melodic-dimensionality 100 --ignore slicetiming --output-spaces MNI152NLin6Asym:res-2 T1w --dummy-scans 5"

fmriprep_sub.py is a Python wrapper script used to run fMRIPrep.

Version:

fmriprep v22.1.0 and v22.1.1

Environment (Docker, Singularity, custom installation):

Singularity container

Data formatted according to a validatable standard? Please provide the output of the validator:

The bids-validator did not report any errors.

Relevant log outputs (up to 20 lines):

Error message:

Node Name: fmriprep_22_1_wf.single_subject_064_wf.fmap_preproc_wf.wf_FM0.bs_filter

File: `/project/********/MRI/fmriprep_v22.1.0_LD/sub-064/log/20221220-123516_9bd453bc-f81c-4066-8e0d-2b9042cb7fdf/crash-20221220-125616-atstak-bs_filter-3692ad96-2797-4aa7-b09a-a0564718a548.txt`
Working Directory: `/scratch/atstak/48315571.dccn-l029.dccn.nl/sub-064/fmriprep_22_1_wf/single_subject_064_wf/fmap_preproc_wf/wf_FM0/bs_filter`
Inputs:

* bs_spacing: `[(100.0, 100.0, 40.0), (16.0, 16.0, 10.0)]`
* debug: `False`
* extrapolate: `True`
* in_data: `<undefined>`
* in_mask: `<undefined>`
* recenter: `mode`
* ridge_alpha: `0.01`
* zooms_min: `4.0`

Screenshots / relevant information:

From the o-file of the failed dataset:

230125-02:12:51,516 nipype.workflow WARNING:
	 [Node] Error on "fmriprep_22_1_wf.single_subject_064_wf.fmap_preproc_wf.wf_FM0.bs_filter" (/scratch/atstak/48453721.dccn-l029.dccn.nl/sub-064/fmriprep_22_1_wf/single_subject_064_wf/fmap_preproc_wf/wf_FM0/bs_filter)
230125-02:12:52,155 nipype.interface INFO:
	 Approximating B-Splines grids (5x5x6, and 15x15x15 [knots]) on a grid of 52x52x32 (86528) voxels, of which 83824 fall within the mask.
230125-02:12:53,135 nipype.workflow ERROR:
	 Node bs_filter failed to run on host dccn-c057.dccn.nl.
230125-02:12:53,143 nipype.workflow ERROR:
....
.....
.....
....

230125-07:56:47,495 nipype.workflow CRITICAL:
	 fMRIPrep failed: Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
    raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node bs_filter.

Traceback:
	Traceback (most recent call last):
	  File "/opt/conda/lib/python3.9/site-packages/nipype/interfaces/base/core.py", line 398, in run
	    runtime = self._run_interface(runtime)
	  File "/opt/conda/lib/python3.9/site-packages/sdcflows/interfaces/bspline.py", line 186, in _run_interface
	    data -= np.squeeze(mode(data[mask]).mode)
	ValueError: operands could not be broadcast together with shapes (52,52,32) (0,) (52,52,32) 


230125-07:56:50,519 cli ERROR:
	 Preprocessing did not finish successfully. Errors occurred while processing data from participants: 064 (1). Check the HTML reports for details.

*The crash report states:*

Node: fmriprep_22_1_wf.single_subject_064_wf.fmap_preproc_wf.wf_FM0.bs_filter
Working directory: /scratch/atstak/48315089.dccn-l029.dccn.nl/sub-064/fmriprep_22_1_wf/single_subject_064_wf/fmap_preproc_wf/wf_FM0/bs_filter

Node inputs:

bs_spacing = [(100.0, 100.0, 40.0), (16.0, 16.0, 10.0)]
debug = False
extrapolate = True
in_data = <undefined>
in_mask = <undefined>
recenter = mode
ridge_alpha = 0.01
zooms_min = 4.0

Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
    raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node bs_filter.

Traceback:
	Traceback (most recent call last):
	  File "/opt/conda/lib/python3.9/site-packages/nipype/interfaces/base/core.py", line 398, in run
	    runtime = self._run_interface(runtime)
	  File "/opt/conda/lib/python3.9/site-packages/sdcflows/interfaces/bspline.py", line 186, in _run_interface
	    data -= np.squeeze(mode(data[mask]).mode)
	ValueError: operands could not be broadcast together with shapes (52,52,32) (0,) (52,52,32)
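
The shapes in that ValueError suggest that `np.squeeze(mode(data[mask]).mode)` evaluated to an empty array, which NumPy cannot broadcast against the 52x52x32 fieldmap volume; an empty mode result usually means the voxel selection handed to `scipy.stats.mode()` was itself empty. A minimal NumPy sketch of just the failing subtraction (not running sdcflows itself):

```python
# Minimal sketch of the failing operation: subtracting an empty "mode" offset from the
# fieldmap array cannot be broadcast. The empty offset stands in for what
# np.squeeze(mode(data[mask]).mode) evaluated to in the traceback above.
import numpy as np

data = np.random.rand(52, 52, 32)   # stand-in for the fieldmap data
offset = np.array([])               # empty offset, shape (0,)

data -= offset                      # ValueError: operands could not be broadcast together
                                    # with shapes (52,52,32) (0,) (52,52,32)
```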

Are you able to share a subject that it fails on for us to identify the problem?

For now, the best bet is probably to fall back to the LTS series (20.2.7) and share datasets that fail on 22.1.

Hi, yes I can share the data set. What is the best way to do this?

Hi @Atsuko. Thanks for your patience. I’ve run your data on the upcoming 23.0.0 release, and it seems to be working okay. I have seen the error you showed for datasets with extremely low SNR in their magnitude images, but I’m not seeing it in yours.

I’ll be releasing 23.0.0 on Monday, and maybe you could test it out?

Thank you for working on this issue, Chris.
Would the new version also fix the weird-looking B0 map that I see in some participants’ data? Like the one below?

If that occurs in either of the subjects you shared with me, then it seems like it should be resolved. That said, looking at the green contour, it looks like masking is once again the problem, and I have not yet been able to resolve mask-related problems. It’s possible that the issue is non-deterministic, in which case deleting the fmap_preproc_wf subdirectory of the working directory (e.g., fmriprep_23_0_wf/single_subject_ANON01_wf/fmap_preproc_wf/) might produce a different result.
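
Something along these lines (using the example working-directory path above; adjust to your own layout) would clear just that node:

```python
# Sketch: remove only the fieldmap-preprocessing subdirectory of the working directory,
# so that it is re-estimated from scratch on the next fMRIPrep run.
import shutil

shutil.rmtree(
    "work/fmriprep_23_0_wf/single_subject_ANON01_wf/fmap_preproc_wf",  # example path from above
    ignore_errors=True,  # don't fail if the directory is already gone
)
```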

Do you mean the case where the fieldmap correction is not applied to the functional data?

Sorry, no, I was referring to the image that you showed in your last post. Are you also having issues with fieldmaps being calculated but not applied?

The fieldmap correction is applied to this dataset, but I was a bit worried about what kind of SDC was taking place when the B0 map looks like the one above. Judging from the image of the corrected functional data, though, the correction seems to be OK.

Okay. This looks like it may be either a bad figure or a polluted working and/or output directory. We’re releasing 23.0.0 today. If you still see weird images when running with a fresh working and output directory, please feel free to open up an issue on fMRIPrep with your full command. If you’re able to share the dataset again (or reproduce it on one of the subjects you provided), that would be great.

There were 3 datasets out of 205 where fieldmaps were calculated but not applied. No error messages were reported at the end, though.

fMRIPrep does not consider it an error not to find fieldmaps to apply to a BOLD file. Could you share the JSON of the uncorrected BOLD and the fieldmaps that were supposed to correct them?

Ahhh, now looking at the JSON files of the fieldmaps for the problematic datasets I mentioned above (fieldmaps present but no SDC applied), I see that the “IntendedFor” field was wrongly assigned for these specific datasets. My apologies!
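
For anyone hitting the same thing, a quick sanity check along the lines below can catch “IntendedFor” entries that don’t point at existing files (the BIDS root path is hypothetical; IntendedFor paths are interpreted relative to the subject directory):

```python
# Sketch: verify that every phasediff fieldmap's "IntendedFor" entries resolve to files
# that actually exist in the BIDS dataset.
import json
from pathlib import Path

bids_root = Path("/project/xxx/MRI/bids_v3.7.4")   # hypothetical path

for sidecar in bids_root.glob("sub-*/fmap/*_phasediff.json"):
    subject_dir = sidecar.parents[1]               # .../sub-XXX
    intended = json.loads(sidecar.read_text()).get("IntendedFor", [])
    if isinstance(intended, str):                  # IntendedFor may be a string or a list
        intended = [intended]
    for rel_path in intended:
        if not (subject_dir / rel_path).exists():
            print(f"{sidecar.name}: IntendedFor target not found -> {rel_path}")
```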


After removing the magnitude2 files from the BIDS folder and re-running the pipeline, the B0 fieldmap looks better now (a sketch of one way to set the magnitude2 files aside follows the images below).
After:

Before:
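
One way to set the magnitude2 files aside (moving rather than deleting them), with hypothetical paths:

```python
# Sketch: move the magnitude2 images (and any sidecars) out of the BIDS tree into a
# backup folder, so the fieldmap estimation only sees magnitude1 and the phasediff image.
from pathlib import Path
import shutil

bids_root = Path("/project/xxx/MRI/bids_v3.7.4")        # hypothetical
backup = Path("/project/xxx/MRI/removed_magnitude2")    # hypothetical
backup.mkdir(parents=True, exist_ok=True)

for f in bids_root.glob("sub-*/fmap/*_magnitude2.*"):
    shutil.move(str(f), str(backup / f.name))
```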
