Using synthetic field map from synbold-disco with fmriprep

Hi @roeysc - I can’t speak to fmriprep particulars, but can hopefully give some overall insight.

The total readout time for the synthesized image should indeed be 0 (infinite bandwidth).

This next statement has some caveats: the total readout time for the acquired (distorted) image doesn’t matter on its own, and we typically default to something like 0.05. If you were to use topup to estimate the field, and then applytopup to unwarp the image, then as long as you give applytopup the same acquisition parameters (i.e., the total readout time you gave topup), it will correctly scale the field.

However, if the units/magnitude of the field map matters for subsequent processing, then the total readout time matters. I’m not sure how fmriprep chooses to estimate and apply the field, so this value possibly does matter.
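To make the scaling point concrete, here is a small sketch (my own illustration, not fmriprep’s code, with made-up values): the voxel displacement along the phase-encoding axis is the fieldmap value in Hz multiplied by the assumed total readout time in seconds, so mis-stating the readout time rescales the apparent distortion.

```python
# Sketch: how TotalReadoutTime scales a fieldmap into voxel shifts.
# All values are made up for illustration.

field_hz = 25.0            # fieldmap value at one voxel, in Hz
total_readout_time = 0.05  # seconds (the placeholder default mentioned above)

# Displacement along the phase-encoding axis, in voxels:
shift_voxels = field_hz * total_readout_time
print(shift_voxels)  # 1.25

# Doubling the assumed readout time doubles the apparent shift,
# which is why the field's units matter for downstream processing.
print(field_hz * (2 * total_readout_time))  # 2.5
```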

We also recommend using reverse-PE acquisitions or field maps when they are available, and using the synthesis method only when they are not. Hope that helps answer some of the questions.


Hi,

Apologies for reviving this old topic. I tried the approach mentioned above:

The SynBOLD outcome (BOLD_s_3d.nii.gz) seems quite nice:

But the results of fMRIprep (v 25.0.0) are not promising:

Note the blue areas extending into the brain:

Note the shady areas in the frontal, occipital, and brainstem regions. It was more severe in some other subjects.

And for another subject:

Here is the fMRIprep command from its report, run via Singularity:

 /opt/conda/envs/fmriprep/bin/fmriprep /input/ /derivatives/fmriprep_25.0.0_nosdc --skip_bids_validation --fs-license-file /toolboxes/license.txt participant --participant-label sub-01 -t rest --output-spaces MNI152NLin6Asym:res-2 MNI152NLin2009cAsym -w /work/ --stop-on-first-crash --nthreads 8

I have not tried fMRIprep’s SyN-based pipeline per se, and I have no experience with which approach, SynBOLD or fMRIprep’s SyN, results in better SDC. Any guidance/help is appreciated.

Best,
Amir

I’m a bit confused here, shouldn’t this step

  • Take the synthesized undistorted BOLD-contrast image (BOLD_s_3D.nii.gz), place it in the fmap folder, and give it a name such as sub-xx_ses-xx_acq-synbold_dir-PA/AP_epi.nii.gz, where the dir-<> label is the opposite of your original BOLD’s.
  • Make your own JSON. It just needs a few fields: (1) TotalReadoutTime - make it whatever you define at runtime or the default (1 second), (2) PhaseEncodingDirection - make it the opposite of your original image (so add or remove the minus sign), (3) the IntendedFor and/or B0FieldSource/Identifier fields to link the fieldmap to your BOLD image. The JSON should be called sub-xx_ses-xx_acq-synbold_dir-PA/AP_epi.json.

use a TotalReadoutTime close to 0? The idea is that the synthesized image is close to undistorted, so it should have a very small total readout time. For DWI, I have used

 "TotalReadoutTime": 0.0000001
 "EffectiveEchoSpacing": 0.0
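For reference, a minimal sketch of writing the sidecar described earlier (filenames, the IntendedFor path, and the near-zero readout value are placeholders; whether a near-zero TotalReadoutTime or the pipeline’s runtime default is the right choice is exactly the question under discussion in this thread):

```python
import json

# Hypothetical sidecar for a synthesized "reverse-PE" image from SynBOLD-DisCo.
# All paths and values below are placeholders; adapt to your dataset.
sidecar = {
    "PhaseEncodingDirection": "j",   # opposite of a BOLD run acquired with "j-"
    "TotalReadoutTime": 0.0000001,   # near zero: the synthetic image is ~undistorted
    "IntendedFor": "ses-01/func/sub-01_ses-01_task-rest_bold.nii.gz",
}

with open("sub-01_ses-01_acq-synbold_dir-PA_epi.json", "w") as f:
    json.dump(sidecar, f, indent=4)
```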

Hi all,

I am trying to do the same - use the fieldmaps generated by SynBOLD-DisCo with fMRIprep. I tried renaming the topup outputs and putting them into the sub-XX/fmap/ folder.
I’m a bit confused as to what goes where and what info goes into the json.
The topup_results_field.nii.gz has the same dimensions as the functional run - does that become sub-XX_ses-BL_field.nii.gz? And does topup_results_fieldcoef.nii.gz become sub-XX_ses-BL_magnitude.nii.gz? And then you list the functional runs under IntendedFor in the JSON, and add the readout time?
Apologies if this is a daft question - I’m not familiar with the topup outputs and how they’re typically used.

Thanks so much!!


Hi @Maz and welcome to neurostars!

This was already answered earlier in the thread.

Best,
Steven

Hi,

Thanks so much, Steven; you’re right, I should have been able to follow this. I was just a bit surprised that the undistorted image works as a field map, but I guess that’s because it’s used before topup has been applied, so it acts more like a reverse-phase-encode image rather than a corrected one?

Thank you,
Maz

Hi @Maz,

Yup!

Best,
Steven