Repeated fmriprep distortion; perhaps from susceptibility distortion

We occasionally have very strange fmriprep preprocessing output, such as the brain in the _space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz image being clearly not MNI-shaped or not centered. These errors often correct themselves when the preprocessing is repeated from the beginning (similar to this).

This “do it again until it works” strategy is failing for a particular session, however, and I am out of debugging ideas. Three runs in one session for a DMCC participant are affected, all AP encoding runs. Fieldmaps were collected at the beginning of the session, then two runs (AP then PA) of each task, in the order Stern, Stroop, Cuedts, Axcpt. The distortion increased over the course of the session: Stern seems fine, Stroop is a bit affected, and Cuedts and Axcpt are the worst. The participant did not leave the scanner during the session.

Here are temporal mean images to give an idea of the distortion; the lower row shows the Stern runs (AP in the first three columns, PA in the last three), the top row Cuedts.

This person’s motion is quite typical (better than many); the dicoms and dcm2niix-converted functional niftis also look typical, as do the SBRef and fieldmap images. Our standard pipelines use older versions of the software; for debugging we converted the dataset using the newest version of dcm2niix and fmriprep 20.2.3, but without noticeable improvement.

Any suggestions for fixing this? Or ideas of what is going wrong? The last two AP runs of this session (Cuedts and Axcpt) are in the BIDS subdirectory of https://wustl.box.com/s/rp7wop16km39736r0thsq9o54h2klmhf, with accompanying anatomy*, SBRef, and fieldmap files. The fmriprep 20.2.3 output is in the fmriprep subdirectory, including the html summary file.

Thanks!

* for debugging we did not include defacing, so I removed the T1 and T2 images from the BIDS version; let me know if you need them.

Hi,

Can you try upgrading to fmriprep 21.0.1 and using a new work directory (freesurfer outputs should be okay to reuse)? The susceptibility distortion correction workflows were overhauled.
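
Something like this is what I mean, whatever wrapper you use to launch it (untested sketch; paths and the participant label are placeholders for your own setup):

```python
# Untested sketch of the rerun I have in mind; paths and the participant
# label are placeholders.
import subprocess

cmd = [
    "fmriprep",                                # assuming 21.0.1 is the version on PATH / in the container
    "/data/bids",                              # BIDS root (placeholder)
    "/data/derivatives/fmriprep-21.0.1",       # output directory (placeholder)
    "participant",
    "--participant-label", "01",               # placeholder label
    "-w", "/scratch/fmriprep_21.0.1_work",     # fresh work directory, not the 20.2.3 one
    "--fs-subjects-dir", "/data/derivatives/freesurfer",  # reuse the existing FreeSurfer runs
    "--fs-license-file", "/opt/freesurfer/license.txt",   # placeholder license path
]
subprocess.run(cmd, check=True)
```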

Best,
Steven

Thanks for the suggestion; we’re setting it up to try but likely won’t have the result until early next week.

I had a look at your fmaps and functional data, and I agree with @Steven that the problem stems from the geometric distortion correction processes that are used to “fix” the susceptibility artifacts.

It seems that there are very heavy distortions + ghosting artifacts in the inferior portions of your spin echo images (the fmaps as they are called in your data structure). Parts highlighted by the crosshair:

I would bet that the “good” subjects will not have such bright-floaty-smeary artifacts around the ear cavities. Considering how, e.g., FSL-TOPUP computes the distortion field and warps the images, it would not be surprising if these bright artifact signals caused large warping errors. One way of dealing with this could be applying a tighter brain mask directly to these fmap images.
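
For example, a minimal sketch of that masking idea (untested; filenames are placeholders, and it assumes the fmap is a 4D spin-echo EPI series):

```python
# Untested sketch: bet the temporal mean of the spin-echo fmap to get a tight
# brain mask, then apply that mask to every volume. Filenames are placeholders.
import subprocess
import nibabel as nib
import numpy as np

fmap_in = "sub-XX_dir-AP_epi.nii.gz"                 # placeholder fmap file
fmap_out = "sub-XX_dir-AP_desc-masked_epi.nii.gz"

img = nib.load(fmap_in)
data = img.get_fdata()
mean_img = nib.Nifti1Image(data.mean(axis=-1), img.affine)   # bet wants a 3D image
nib.save(mean_img, "fmap_mean.nii.gz")

# -f sets the fractional intensity threshold (raise it for a tighter mask that
# cuts off the bright blobs near the ear cavities); -m also writes the binary mask.
subprocess.run(["bet", "fmap_mean.nii.gz", "fmap_brain.nii.gz", "-f", "0.5", "-m"],
               check=True)

mask = nib.load("fmap_brain_mask.nii.gz").get_fdata() > 0
nib.save(nib.Nifti1Image(data * mask[..., np.newaxis], img.affine, img.header),
         fmap_out)
```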

Hope it is useful in some way. Good luck :slight_smile:

Nice investigation! A good follow-up would be to compare the SyN fieldmap-less approach vs. the TOPUP correction that 21.0.1 would use, which would provide more information about whether the error derives from the acquired fmaps. Another possibility is to try using only the fmap whose phase encoding is opposite to your BOLD run. That is, if your BOLD is AP, ignore the AP fieldmap and use only the PA fieldmap. I imagine that if the problem is in the fmaps this wouldn’t help much, but at least one phase encoding direction would come from your BOLD run. Curious to see how this ends up!
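
For the SyN comparison, running once as usual and once with something like --ignore fieldmaps --use-syn-sdc should do it. For the “PA fieldmap only” test, one way (untested, and the patterns are placeholders; double check that fmriprep then pairs the PA fmap with your AP BOLD the way you expect) is to hide the AP spin-echo fmap from fmriprep via .bidsignore:

```python
# Untested sketch: append the AP spin-echo fmap files to .bidsignore so the BIDS
# indexing skips them and only the PA fmap is available for SDC.
# Patterns are placeholders; adjust to your actual filenames.
from pathlib import Path

bids_root = Path("/data/bids")                        # placeholder BIDS root
patterns = [
    "sub-XX/ses-YY/fmap/*dir-AP*_epi.nii.gz",         # placeholder AP fmap images
    "sub-XX/ses-YY/fmap/*dir-AP*_epi.json",
]

bidsignore = bids_root / ".bidsignore"
existing = bidsignore.read_text().splitlines() if bidsignore.exists() else []
with bidsignore.open("a") as f:
    for pattern in patterns:
        if pattern not in existing:
            f.write(pattern + "\n")
```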

Steven

Thank you all! I will report back early next week, when we should have the 21.0.1 results. I will definitely also investigate the artifacts around the ears in the fmaps; I have many images to compare against, but haven’t yet looked at how the degree of artifact in the fmaps relates to these sorts of distortions.

@ofgulban I’ve never considered masking the fmaps, but it is an interesting idea for these sorts of unusual cases. Have you had success masking?

Hi @jaetzel ,

I have tinkered a bit with your data and fsl-topup. Here are my insights:

  1. There are very strong artifacts within the inferior part of the EPI slab. They seem to be a mix of susceptibility artifacts and ghosts. As I suggested above, I applied a simple fsl-bet brain mask to remove these (see gif below, around the crosshair). Unfortunately, it seems that these strong artifacts penetrate your brain tissue too, so it is not really possible to remove them in the AP runs.

  2. I have compared the default fsl-topup vs. a topup based on the masked fmaps. You can see that the “masked topup” yields less “blown up” dark artifact within the inferior temporal cortex. However, note also that the initial gradient echo image (aka the BOLD image) is already quite a bit affected; see the large darkness within the rings. Therefore I think the best idea might be to label this general region as “artifact-dominated” and perhaps exclude it from your further analyses.

A funny side note is that fsl-topup does not like zeros when you input masked fmaps. A simple imputation of the zeros with 1s makes the program execute. Another idea might be imputing the 0s with low-magnitude, low-spread Gaussian noise (with noise characteristics similar to, e.g., the air voxels in EPI).
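
In case it helps, a minimal sketch of that imputation (untested; the filename is a placeholder):

```python
# Untested: replace the zeros left by masking with 1s (or low-amplitude noise)
# before feeding the image to topup. Filename is a placeholder.
import nibabel as nib
import numpy as np

img = nib.load("fmap_masked.nii.gz")        # placeholder: the masked fmap
data = img.get_fdata()
zeros = data == 0

data[zeros] = 1.0                           # simplest fix: impute with 1s

# Alternative: low-magnitude, low-spread Gaussian noise, roughly mimicking the
# air voxels of an EPI (the loc/scale values here are purely illustrative).
# rng = np.random.default_rng(0)
# data[zeros] = np.abs(rng.normal(loc=2.0, scale=1.0, size=int(zeros.sum())))

nib.save(nib.Nifti1Image(data, img.affine, img.header), "fmap_masked_imputed.nii.gz")
```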

Another side note is that I don’t know whether fmriprep uses a different topup configuration. E.g. the warp resolution (--warpres) and subsampling scheme (--subsamp) parameters could be important for containing the “blowing up” of the artifact-ridden region.
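
For anyone who wants to experiment with those parameters directly, a bare-bones version of the kind of topup call I was testing looks like this (untested as written; filenames are placeholders, and the --warpres/--subsamp values are only illustrative, not necessarily what fmriprep uses):

```python
# Untested sketch of a direct topup call, showing where --warpres and --subsamp
# enter. Filenames are placeholders; parameter values are illustrative only.
import subprocess

# acqparams.txt needs one row per volume of the merged AP/PA image, e.g.
#   0 -1 0 0.0415    (AP volumes; last column is the total readout time in s)
#   0  1 0 0.0415    (PA volumes)
cmd = [
    "topup",
    "--imain=fmap_APPA_merged.nii.gz",    # AP and PA spin-echo volumes merged along time
    "--datain=acqparams.txt",
    "--out=topup_results",
    "--fout=fieldmap_Hz.nii.gz",          # estimated field (Hz)
    "--iout=unwarped_fmap.nii.gz",        # unwarped spin-echo images, for QC
    "--warpres=20,16,14,12,10,6,4,4,4",   # warp-field resolution (mm) at each level
    "--subsamp=2,2,2,2,2,1,1,1,1",        # sub-sampling scheme at each level
]
subprocess.run(cmd, check=True)
```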

Thank you! I think your diagnosis of the problem as due to the poor fmap is spot on. This participant was in two waves of scanning, with three sessions in each wave. I pulled the other five pairs of fmaps and added them to the shared box folder.

I plotted slices from each of them in order of session date in a knitr pdf, now in the root of that box directory. This person’s sessions were closer together in time than those of many of our participants; all six sessions were collected over less than six months. The two sessions from late September (wave1rea and wave1pro) have clearly more distortion in the fmaps than the rest:

Looking at these, it’s surprising we didn’t have more issues with preprocessing these sessions! My QC procedure has been just checking that the fmaps have brains; clearly I should look more closely. The distortion in the two late September sessions seems similar, suggesting this was a scanner problem rather than something like movement during the fmap acquisitions? I can check whether we collected any other participants during this time frame, and if so, how their fmaps look.
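
For anyone wanting to do a similar quick look, the idea is just plotting the same slices from each session’s fmaps side by side; something like this Python/nilearn sketch would do it (my actual plots were made with knitr, and the glob pattern below is a placeholder):

```python
# Untested sketch of a quick fmap QC look: plot a few axial slices from the
# first volume of each session's fmap. The glob pattern is a placeholder, and
# the files are sorted by name here rather than by session date.
import glob
import matplotlib.pyplot as plt
from nilearn import image, plotting

fmaps = sorted(glob.glob("sub-XX/ses-*/fmap/*dir-PA*_epi.nii.gz"))

fig, axes = plt.subplots(len(fmaps), 1, figsize=(10, 3 * len(fmaps)), squeeze=False)
for ax, fname in zip(axes[:, 0], fmaps):
    plotting.plot_epi(image.index_img(fname, 0), display_mode="z",
                      cut_coords=5, axes=ax, title=fname)
fig.savefig("fmap_QC.png", bbox_inches="tight")
```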

Some of the wave1rea runs are currently running through fmriprep 21.0.1; I’ll share how it turns out when we have results.

I marked @ofgulban’s response as the Solution because the strange fmaps seem to be the source of the problem. As a test I tried preprocessing the (distorted) wave1Rea session runs with the (not-distorted) wave1Bas session fmaps, and the output looks much more reasonable. We still need to do more tests, but I’m optimistic that we might be able to salvage at least some of the session’s data by swapping out the fmaps.
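
For the record, the swap itself is just bookkeeping in the fmap sidecars; a rough sketch of what we’re testing (untested as shown; the filenames are placeholders, and the distorted session’s own fmaps also need to stop claiming these runs, e.g. by editing their IntendedFor or .bidsignore-ing them):

```python
# Untested sketch of the fmap swap: point the good session's fmap sidecars at
# the distorted session's runs. IntendedFor paths are relative to the subject
# directory, so cross-session references are allowed; filenames below are
# placeholders.
import json
from pathlib import Path

good_fmap_jsons = Path("sub-XX/ses-wave1bas/fmap").glob("*_epi.json")            # placeholder
target_runs = [
    "ses-wave1rea/func/sub-XX_ses-wave1rea_task-Cuedts_acq-mb4AP_bold.nii.gz",   # placeholder
    "ses-wave1rea/func/sub-XX_ses-wave1rea_task-Axcpt_acq-mb4AP_bold.nii.gz",    # placeholder
]

for sidecar in good_fmap_jsons:
    meta = json.loads(sidecar.read_text())
    intended = meta.get("IntendedFor", [])
    if isinstance(intended, str):
        intended = [intended]
    meta["IntendedFor"] = sorted(set(intended + target_runs))
    sidecar.write_text(json.dumps(meta, indent=2))
```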

… and we have now added “check the fieldmaps for large distortions” to our acquisition SOPs; it only takes a few seconds to pull them up on the scanner computer.

thanks again, @ofgulban and @Steven!

This is good to hear @jaetzel , thanks for the update. I am glad to be of help :slight_smile:

Good luck with the rest of the analyses :four_leaf_clover: