I am using SyN-based susceptibility distortion correction with the --use-syn-sdc flag. The data are from a GE 3T scanner. I have run it on two subjects so far and it seems to complete without error; however, there appears to be an issue with the before and after images of the SDC, and I’d like to check what might be causing it (see images below).
On visual inspection it looks like the ‘before’ and ‘after’ labelling is the wrong way around. Could this be a possibility? The snippets from the before image look much cleaner, and the after image looks more stretched and distorted. I’ve run the fMRIPrep pipeline separately a few times for this participant (default, with no flags other than SDC), once each with PhaseEncodingDirection set to +y and then -y in the .json file. I’ve also excluded PhaseEncodingDirection from the .json file to let fMRIPrep determine the best direction for SDC itself, per the documentation (assuming that it only pulls this information from the .json file…). All versions show similar SDC output.
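For concreteness, this is roughly how I’ve been toggling the field between runs (a sketch; the path and metadata are illustrative, and I understand BIDS expects the i/j/k axis codes with an optional trailing “-”, rather than x/y/z):

```python
import json
import tempfile
from pathlib import Path

# Illustrative sidecar; in a real dataset this sits next to the BOLD file,
# e.g. sub-01/func/sub-01_task-rest_bold.json
sidecar = Path(tempfile.mkdtemp()) / "sub-01_task-rest_bold.json"
sidecar.write_text(json.dumps({"RepetitionTime": 2.0}))

meta = json.loads(sidecar.read_text())
# Valid BIDS values are "i", "i-", "j", "j-", "k", "k-" (voxel axes);
# how these map onto A>>P vs. P>>A depends on the image orientation.
meta["PhaseEncodingDirection"] = "j-"
sidecar.write_text(json.dumps(meta, indent=2))

print(json.loads(sidecar.read_text())["PhaseEncodingDirection"])
```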
One final related note: a fieldmap was actually collected for this participant, but in a pre-processed form (which we decided we will not use), so there is an fmap folder containing two NIfTI files: b0_raw_phase.nii.gz and b0_raw_mag.nii.gz. Since the pipeline seems to use the syn-sdc flag, I doubt that it pulls from the files in this folder, but thought I should include that information in case there is some incompatibility.
Would you have any idea what might be causing this?
Thanks in advance!
Could you share the full HTML report? It would help us get a better view.
In general, any distortion correction requires applying a non-linear warp and thus interpolation (which causes smoothing). In your case it seems that the original data do not suffer from much distortion, so it might not be worthwhile applying the correction.
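To illustrate why interpolation smooths, here is a toy 1-D sketch (not fMRIPrep code): resampling an impulse shifted by half a voxel with linear interpolation halves its peak, spreading the signal over neighbouring voxels.

```python
import numpy as np

# An impulse: the sharpest possible feature in a 1-D "image"
x = np.arange(10, dtype=float)
signal = np.zeros(10)
signal[5] = 1.0

# Resample the signal shifted by half a voxel using linear interpolation
shifted = np.interp(x - 0.5, x, signal)

print(shifted.max())  # 0.5: the peak is halved, i.e. the feature is smoothed
```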
As for the fieldmaps, they have to be stored and annotated appropriately for fMRIPrep to pick them up (the BIDS validator probably pointed out that b0_raw_phase.nii.gz and b0_raw_mag.nii.gz are not part of BIDS). More details can be found in the BIDS spec (section 8.9).
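If you do later decide to use the acquired fieldmap, a minimal sketch of BIDS-ifying it might look like this. It assumes your phase image is a phase-difference map (worth confirming with whoever set up the sequence); the subject label, IntendedFor target, and echo times are placeholders you would replace with your real values.

```python
import json
import tempfile
from pathlib import Path

# Mock layout mirroring the post; in practice fmap_dir already exists
fmap_dir = Path(tempfile.mkdtemp()) / "sub-01" / "fmap"
fmap_dir.mkdir(parents=True)
(fmap_dir / "b0_raw_mag.nii.gz").touch()
(fmap_dir / "b0_raw_phase.nii.gz").touch()

# BIDS "phase-difference" fieldmap naming: one phasediff + one magnitude image
renames = {
    "b0_raw_mag.nii.gz": "sub-01_magnitude1.nii.gz",
    "b0_raw_phase.nii.gz": "sub-01_phasediff.nii.gz",
}
for old, new in renames.items():
    (fmap_dir / old).rename(fmap_dir / new)

# The phasediff sidecar must carry both echo times, and IntendedFor tells
# fMRIPrep which runs this fieldmap corrects
sidecar = {
    "EchoTime1": 0.00492,  # placeholder: use your sequence's values
    "EchoTime2": 0.00738,  # placeholder
    "IntendedFor": ["func/sub-01_task-rest_bold.nii.gz"],
}
(fmap_dir / "sub-01_phasediff.json").write_text(json.dumps(sidecar, indent=2))
```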
Thanks for the quick response and info. Here’s the HTML file:
I would agree that, for the “rest” task, the registration used by SDC-SyN didn’t work very well. For the other tasks it worked fairly decently, IMHO.
This poor correction you are seeing on the “rest” run does not have a clear solution for now. I’m opening this issue https://github.com/poldracklab/fmriprep/issues/747 describing a fix for these problematic runs, but I think it will take some time before we see it implemented in fMRIPrep (unless an interested individual submits a PR with this feature…). We are pretty swamped right now, so we are unlikely to take this on in the near future.
Although SDC-SyN appears to work better for the other task runs, it still results in smoothing and “smudging” of the image (as mentioned by Chris) in a way that, visually, seems to compromise the data. Would you still recommend using SDC-SyN for those other runs in spite of that, given that the estimation performs well numerically and is still superior at reinstating the original brain structure?
It is important to point out that the images you see in the report are not exactly the final output; they are “pictures” taken at each stage of processing that enhance certain aspects for faster visual assessment.
What does this mean? The “smudging” effects you see in the report panels should not be present in the final data. SDC-SyN estimates a warp that, as you mention, tries to reinstate the original brain structure. The reports present one reference image derived from the BOLD time series, with enhancements for registration and visualization. So the main focus of the SDC-SyN panels is the accuracy of that estimation. Simply put: do the contours align better with the data after SDC-SyN? If so, then you should use the correction (IMHO).
fMRIPrep is designed so that only one resampling step is performed, precisely to avoid smudging the data. The transforms from head-motion estimation and susceptibility-distortion estimation are combined (along with other pertinent transforms, for instance if you want data in MNI space) and applied in a single step. For this reason, it is important that the fieldmap is estimated accurately.
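A toy 1-D sketch (not fMRIPrep code) of why composing transforms before resampling matters: resampling an impulse twice with linear interpolation blurs it, while composing the two shifts and resampling once preserves it.

```python
import numpy as np

x = np.arange(10, dtype=float)
impulse = np.zeros(10)
impulse[4] = 1.0

def resample(sig, displacement):
    # Linear-interpolation resampling of sig under a shift of the sampling grid
    return np.interp(x - displacement, x, sig)

# Two resampling steps (e.g. motion correction, then distortion correction)
two_step = resample(resample(impulse, 0.5), 0.5)

# fMRIPrep-style: combine the transforms first, then resample once
one_step = resample(impulse, 0.5 + 0.5)

print(two_step.max())  # 0.5 -- repeated interpolation blurs the peak
print(one_step.max())  # 1.0 -- a single resampling keeps it sharp
```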
In your reports I could see that, arguably, SDC-SyN performed “well” in most cases. But the distortion is exaggerated for the “rest” task. In that case only, you should avoid the fieldmap estimation done with SDC-SyN.
Finally, as Chris mentioned, if the distortion is not too big (I cannot say how to judge this) it may not be worth including the fieldmap, as over-correction and correction in the wrong direction are much worse than not correcting at all.
Thanks for the details and advice on how to inspect the output.
Given there are quite a few runs per participant, I’ve noticed that sometimes the contours don’t align very well in certain runs across participants, so it is likely I will not use the fieldmap-less estimation. However, I have been running it in general to inspect the output, so let me know if you would like me to send examples of it not working very well, in case that information is useful for future fixes and development.
Quick general question about the fieldmap-less estimation: we are currently testing fieldmap-less vs. no fieldmap correction with our functional data. In the .html output doc, the references listed for this approach (Wang et al. 2017; Huntenburg 2014; Treiber et al. 2016) are all papers on DTI studies. In the fMRIPrep documentation, it says that this is an experimental approach. Are there recommendations, and/or has this been tested with functional data? We are hesitant to use it if the rationale for this approach is based entirely on DTI studies. Thanks for any clarification about this!
The “experimental” label has been there quite a while, which may be misleading. The issue is that there isn’t a ground truth we can optimize toward in a principled way; we tried the technique on a lot of data and found some good parameters. There has not yet been a systematic comparison with more direct estimates of susceptibility distortion, so it should be treated with some level of skepticism.
I would say this shouldn’t stop you from using it. As with any other step, it’s important to understand what it’s doing and what its limitations are, and to inspect the results. So I would recommend trying some subjects with and without it, and seeing whether it seems to help.