Inconsistent image intensity across runs in the averaged fMRI images used for coregistration

Summary of what happened:

Dear all,

We encountered unexpected output from fMRIPrep when processing task-based fMRI data acquired on a Siemens 3 T Cima.X scanner. The dataset includes pairs of spin-echo fieldmaps with reversed phase-encoding directions for distortion correction.
During acquisition, we did not use prescan normalization, so the raw fMRI data naturally show an intensity bias, as expected (see figure 1 below).

The issue is that the image intensity of the *_desc-coreg_boldref.nii.gz files is inconsistent across runs. As we understand it, this image is used for coregistration to the T1-weighted image, and a bias-field correction is typically applied to improve alignment. However, this correction seems to succeed in some runs (e.g., run-03 in our case) but not in others (e.g., run-02), resulting in marked intensity differences across runs. We have attached example images illustrating the issue (see figure 2 below). This occurred with fMRIPrep v24.1.1, and the runs showing this inconsistency also yield weaker or absent BOLD z-score signals in the GLM results.

We verified that the raw and other preprocessed images appear consistent across runs:

  • Averaged raw fMRI images show comparable intensity bias levels (see figure 1).
  • tSNR maps look similar across runs (see figures 1 and 2).
  • *_desc-hmc_boldref.nii.gz images (generated just before the coregistration step) are also consistent (see figure 2).

Thus, the inconsistency seems to arise at the *_desc-coreg_boldref.nii.gz step.
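For reference, this inconsistency can also be quantified directly from the coreg references rather than by visual inspection alone. A minimal sketch, assuming FSL is on the PATH (the derivatives path is hypothetical):

func_dir=/path/to/derivatives/sub-003/ses-OFSpeed2/func   # hypothetical path
for run in 01 02 03 04; do
  ref="${func_dir}/sub-003_ses-OFSpeed2_task-fMRI_run-${run}_desc-coreg_boldref.nii.gz"
  [ -f "${ref}" ] || continue
  # robust min/max (-r) and median intensity (-p 50) for each run's reference
  echo "run-${run}: $(fslstats "${ref}" -r -p 50)"
done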

We wondered whether updating the fMRIPrep version would help. When we reran the same analysis using v25.2.3, we found that the inter-run inconsistency in the *_desc-coreg_boldref.nii.gz images was much smaller than in the output from v24.1.1, although some differences still appear to remain (see figure 3 below).

We are unsure where this issue originates. One possibility is instability in the step that applies intensity non-uniformity (INU) correction when generating the *_desc-coreg_boldref.nii.gz images. Alternatively, there may be an issue in applying the susceptibility distortion correction to this image.
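One way to narrow this down would be to run the INU correction directly on the *_desc-hmc_boldref.nii.gz images and compare the estimated bias fields between runs. A minimal sketch, assuming ANTs is installed (paths are hypothetical, and fMRIPrep's actual N4 parameters may differ):

func_dir=/path/to/derivatives/sub-003/ses-OFSpeed2/func   # hypothetical path
for run in 02 03; do
  in="${func_dir}/sub-003_ses-OFSpeed2_task-fMRI_run-${run}_desc-hmc_boldref.nii.gz"
  # write both the corrected image and the estimated bias field for each run
  N4BiasFieldCorrection -d 3 -i "${in}" \
    -o "[run-${run}_desc-n4_boldref.nii.gz,run-${run}_biasfield.nii.gz]" \
    -c "[50x50x50x50,1e-7]" -s 4
done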

We would like to confirm whether this behavior is expected or whether it could indicate a preprocessing bug in earlier versions of fMRIPrep. Could you advise whether any specific settings or tests are recommended to ensure consistent bias correction across runs?

Thank you very much for your time and assistance.

Command used (and if a helper script was used, a link to the helper script or the command generated):

Preprocessing script (excerpt):

#!/bin/bash
.
.
.
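# Run fMRIPrep in the container; each -B flag bind-mounts a host directory
# onto the container path referenced by the fMRIPrep arguments below.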
apptainer run \
  --cleanenv \
  -B "${BIDS_DIR}:/bids:ro" \
  -B "${OUTPUT_DIR}:/out" \
  -B "${FSLicense}:/fslicense.txt:ro" \
  -B "${WORK_LOCAL}:/work" \
  -B "${BIDSDB_LOCAL}:/bidsdb" \
  -B "${FREESURFER_DIR_LOCAL}:/fsdir" \
  -B "${TMPDIR}:${TMPDIR}" \
  "${APPTAINER_IMAGE}" \
  /bids /out participant \
  --participant-label "${sub_ID}" \
  --fs-license-file /fslicense.txt \
  --fs-subjects-dir /fsdir \
  --work-dir /work \
  --bids-database-dir /bidsdb \
  --nthreads "${NTHREADS}" \
  --omp-nthreads "${OMP_NTHREADS}" \
  --mem_mb "${MEMMB}" \
  --output-spaces T1w:res-native \
  --skip-bids-validation \
  --resource-monitor

Version:

fMRIPrep v24.1.1 and fMRIPrep v25.2.3

Environment (Docker, Singularity / Apptainer, custom installation):

Apptainer

Data formatted according to a validatable standard? Please provide the output of the validator:

The file structure was confirmed to be BIDS-valid using the online BIDS Validator.
The IntendedFor field was manually added to the AP and PA fieldmap JSON sidecars, with entries of the form (one way to script this is sketched below):
ses-*/func/sub-*_ses-*_task-fMRI_run-*_bold.nii.gz
Susceptibility distortion correction was successfully applied to all functional images, as confirmed by visual inspection of the fMRIPrep reports.
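For completeness, a sketch of how such IntendedFor entries can be scripted (filenames are hypothetical; assumes jq is installed; per BIDS, the paths are relative to the subject directory):

for sidecar in sub-003/ses-OFSpeed2/fmap/sub-003_ses-OFSpeed2_dir-AP_epi.json \
               sub-003/ses-OFSpeed2/fmap/sub-003_ses-OFSpeed2_dir-PA_epi.json; do
  # overwrite (or create) the IntendedFor field in each fieldmap sidecar
  jq '.IntendedFor = [
        "ses-OFSpeed2/func/sub-003_ses-OFSpeed2_task-fMRI_run-02_bold.nii.gz",
        "ses-OFSpeed2/func/sub-003_ses-OFSpeed2_task-fMRI_run-03_bold.nii.gz"
      ]' "${sidecar}" > tmp.json && mv tmp.json "${sidecar}"
done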

Screenshots / relevant information:

Operating system: Ubuntu 24.04.2 LTS


Hi @Ajisai , welcome to Neurostars!

This seems related to this thread: Coregistration failing randomly between tpl-MNI152NLin2009cAsym_res-02_desc-fMRIPrep_boldref.nii.gz and SBref · Issue #3163 · nipreps/fmriprep · GitHub

I haven’t seen any solution there yet.

It does look like INU correction is not performing correctly in those runs.

@Ajisai what’s missing in this report is the SVG from fMRIPrep’s report showing the alignment between the anatomy and the BOLD data. If this failing correction biases co-registration, I would expect misalignment between the T1w and the BOLD data.

If there’s no misalignment or the misalignment is minor (and hence nuisance regression is not very off, etc), then what you report about z-scores suggests that these runs are substantially different for reasons other than preprocessing.

WDYT @effigies?

Hi, @jsein

Thank you very much for the information! I’ll check with my colleagues to see if we can find a practical solution for this issue.

Hi, @oesteban

Thank you very much for your reply!

You raised a very good point about checking the SVG files. I’m attaching them here for reference.

Below are the SVG files showing the co-registration between the T1w and BOLD data. There are four files corresponding to run-02 and run-03 processed with fMRIPrep v24.1.1 and v25.2.3, respectively:
  • sub-003_ses-OFSpeed2_task-fMRI_run-02_desc-coreg_bold_fMRIPrep-ver24.1.1.svg
  • sub-003_ses-OFSpeed2_task-fMRI_run-03_desc-coreg_bold_fMRIPrep-ver24.1.1.svg
  • sub-003_ses-OFSpeed2_task-fMRI_run-02_desc-coreg_bold_fMRIPrep-ver25.2.3.svg
  • sub-003_ses-OFSpeed2_task-fMRI_run-03_desc-coreg_bold_fMRIPrep-ver25.2.3.svg
By visual inspection, I didn’t see a clear misalignment in any of these four files.

Similarly, the following files show the quality of susceptibility distortion correction (SDC):
  • sub-003_ses-OFSpeed2_task-fMRI_run-02_desc-sdc_bold_fMRIPrep-ver24.1.1.svg
  • sub-003_ses-OFSpeed2_task-fMRI_run-03_desc-sdc_bold_fMRIPrep-ver24.1.1.svg
  • sub-003_ses-OFSpeed2_task-fMRI_run-02_desc-sdc_bold_fMRIPrep-ver25.2.3.svg
  • sub-003_ses-OFSpeed2_task-fMRI_run-03_desc-sdc_bold_fMRIPrep-ver25.2.3.svg
In the post-SDC images from v24.1.1, I noticed a clear superior–inferior intensity gradient (a “dome”-like brightness toward the top of the brain) in run-02 but not in run-03. In the v25.2.3 outputs, this gradient is much less pronounced, though still visible.

Hope this information helps clarify what’s happening.

Okay, then it’s safe to say that you should not worry about this INU correction failure. These desc-coreg references are generated for the sole purpose of co-registering to the T1w, and that does seem to work nonetheless (not too surprising if bbregister was used, as it relies on the GM/WM contrast edge to drive registration, and a failed INU correction is unlikely to affect that step much).

That part is contained within fMRIPrep’s responsibility. However, the overarching question of why you consistently get lower activation contrast for the INU-resistant runs suggests that there is some artifact in those runs (spikes?) that confuses N4BiasFieldCorrection and then alters the z-scores so substantially.
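One quick way to screen for such spike volumes is DVARS-based outlier detection; a sketch, assuming FSL is available (the path is hypothetical):

bold=/path/to/bids/sub-003/ses-OFSpeed2/func/sub-003_ses-OFSpeed2_task-fMRI_run-02_bold.nii.gz
# flag high-DVARS volumes and save the metric trace and plot for inspection
fsl_motion_outliers -i "${bold}" -o run-02_spikes.txt \
  --dvars -s run-02_dvars.txt -p run-02_dvars.png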

Okay, this is a different issue. May I ask you to (i) open a new discussion here on Neurostars to talk about it, and (ii) send us, or post here, the full reports for these four examples? Sending us the full HTML reports will help us make a more general assessment of the situation.

Hi, @oesteban

Below are two zip folders containing all SVG and HTML report files for sub-003. One was preprocessed with fMRIPrep v24.1.1, and the other with fMRIPrep v25.2.3.
You can find the information for run-02 and run-03 of ses-OFSpeed2 in the respective HTML reports.
  • Preprocessing_report_for_sub-003_fMRIPrep-ver24.1.1.zip
  • Preprocessing_report_for_sub-003_fMRIPrep-ver25.2.3.zip

Thank you very much for your kind assistance.

As for the original issue, the “dome”-like effect is just an artifact in the intermediate result used for the visualization. It doesn’t seem to have any correlation with better/worse SDC, and therefore I don’t think you should worry about it specifically. I would, though, keep an eye on whether these inhomogeneities are present in fMRIPrep’s output files; I don’t have an answer as to whether fMRIPrep should account for them or not.

In principle, INU correction should not have a substantial effect if you were to calculate correlations, nor on voxelwise GLMs. I would check this by pitting the two versions you have (24.1.1 vs. 25.2.3) against each other, to make sure I’m not telling you something wrong.
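For instance, a rough sketch of such a comparison (hypothetical paths; assumes FSL, and that both versions wrote outputs on the same T1w grid):

v24=/path/to/derivatives-24.1.1/sub-003/ses-OFSpeed2/func/sub-003_ses-OFSpeed2_task-fMRI_run-02_space-T1w_desc-preproc_bold.nii.gz
v25=/path/to/derivatives-25.2.3/sub-003/ses-OFSpeed2/func/sub-003_ses-OFSpeed2_task-fMRI_run-02_space-T1w_desc-preproc_bold.nii.gz
mask=/path/to/derivatives-25.2.3/sub-003/ses-OFSpeed2/func/sub-003_ses-OFSpeed2_task-fMRI_run-02_space-T1w_desc-brain_mask.nii.gz
# absolute difference between versions, then its mean per volume within the brain
fslmaths "${v24}" -sub "${v25}" -abs diff_abs
fslstats -t diff_abs -k "${mask}" -m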

If you happen to try an experiment as such, please do report back here; it would be of great relevance to this community.

Hi, @oesteban

Thank you very much for your reply!

We are currently running another round of fMRI scans using the same protocol and experimental paradigm. This time, we have enabled prescan normalization to examine whether the absence of prescan normalization might have contributed to the issue we observed.

Although it may take some time to complete the new scans and analyses, we will report back to the community. We plan to post a new topic summarizing the differences between the prescan-on vs. prescan-off data, as well as the differences we observe between fMRIPrep v24.1.1 and v25.2.3.

Thanks again for your help and guidance.