Tedana excludes a lot of voxels from adaptive mask -- problems with fieldmap?

Hi all,

We are using tedana to combine and denoise our multi-echo data following preprocessing with fmriprep. About halfway through our data collection, we realised that for some of our subjects the fieldmaps weren’t being applied correctly, due to unwanted shim changes between the AP and PA fieldmap acquisitions. To address this, we tried two approaches:

a. running fmriprep while “forcing” the IntendedFor fields in both the AP and PA fieldmap JSON files to contain all scans (and all echoes), so that distortion correction would be applied to every scan using any available fieldmap with the same shim settings as the EPI scans (the PA fieldmap, for many subjects); and
b. running fmriprep without the fieldmaps (using the --use-syn-sdc flag), relying instead on ANTs SyN-based susceptibility distortion correction.
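For context, the “forced” IntendedFor fields looked roughly like the following sketch of a fieldmap JSON sidecar (the sub-XX/task-YY names and metadata values here are placeholders, not our actual acquisition parameters):

```json
{
  "PhaseEncodingDirection": "j-",
  "TotalReadoutTime": 0.05,
  "IntendedFor": [
    "func/sub-XX_task-YY_run-1_echo-1_bold.nii.gz",
    "func/sub-XX_task-YY_run-1_echo-2_bold.nii.gz",
    "func/sub-XX_task-YY_run-1_echo-3_bold.nii.gz"
  ]
}
```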

We then ran tedana on the outputs of both methods. For subjects with a “forced” IntendedFor field (preprocessed with any available fieldmap), tedana dropped many more voxels from its adaptive mask, and in some cases excluded huge chunks of the brain from the final mask (mostly in subjects with only one fieldmap available). This did not happen for the same subjects when run with --use-syn-sdc.

We have a few questions regarding the use of fieldmaps and the pipeline integrating fmriprep and tedana:
(1) Could using only the PA fieldmap (instead of both AP and PA) explain why tedana treats so many voxels in the distortion-corrected scans as noise?

(2) Related to question 1: would it be better to just go with SyN-based distortion correction rather than using only the PA fieldmap? (We had assumed that having both AP and PA fieldmaps for distortion correction would be optimal.)

(3) Given that we are taking the preprocessed single-echo EPI images from fmriprep, and then using tedana to denoise and combine these multi-echo images, we may need to perform the subsequent registration steps manually. Would it make sense to use antsApplyTransforms to transform our data from scanner space into T1w and MNI space? We plan to use the following command:

# -i : tedana optimally combined / denoised image, scanner space
# -r : fmriprep T1w-aligned reference to resample to
# -o : output image
# -t : transform mapping scanner space to T1w space (see note*)
antsApplyTransforms \
  -d 3 \
  -e 3 \
  -i desc-denoised_bold.nii.gz \
  -r sub-XX_task-YY_run-Z_space-T1w_boldref.nii.gz \
  -o _desc-denoised_bold_T1w.nii.gz \
  -n LanczosWindowedSinc \
  -t sub-XX_task-YY_run-Z_from-scanner_to-T1w_mode-image_xfm.txt \
  --float \
  -v

*= transformation matrix that maps scanner space to T1w space, generated by fmriprep.
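Similarly, for MNI space we are considering chaining the scanner-to-T1w transform with fmriprep’s T1w-to-MNI transform in a single call, along these lines (a sketch only; the exact derivative filenames are our assumptions and would need checking against the fmriprep output directory):

```shell
# Sketch: filenames below are assumed fmriprep derivative names.
# antsApplyTransforms applies -t transforms in reverse (last-to-first) order,
# so the scanner-to-T1w transform is listed last and applied first.
antsApplyTransforms \
  -d 3 \
  -e 3 \
  -i desc-denoised_bold.nii.gz \
  -r sub-XX_task-YY_run-Z_space-MNI152NLin2009cAsym_boldref.nii.gz \
  -o desc-denoised_bold_space-MNI152NLin2009cAsym.nii.gz \
  -n LanczosWindowedSinc \
  -t sub-XX_from-T1w_to-MNI152NLin2009cAsym_mode-image_xfm.h5 \
  -t sub-XX_task-YY_run-Z_from-scanner_to-T1w_mode-image_xfm.txt \
  --float \
  -v
```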

We would like any feedback or suggestions on how to best integrate tedana with fmriprep when preprocessing multi-echo images!
Thanks in advance!

Hi Nikita,

First, some of the masking methods have changed and I’d recommend using tedana version 24.0.1 or later.

Some documentation about masking in tedana is here: FAQ — tedana 24.0.2 documentation

We recommend you provide your own initial mask to tedana using the --mask option; the adaptive mask is then applied within tedana. It sounds like this is what you are already doing. The adaptive mask method in tedana should be fairly conservative. By default, it finds the 33rd-percentile voxel magnitude in each echo and masks out voxels whose magnitude is less than 1/3 of that value. In practice, that means it will mask out voxels that look black relative to the other voxels within the mask.
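To make that concrete, here is a rough numpy sketch of the thresholding idea (an illustration only, not tedana’s actual implementation, and the synthetic data are made up):

```python
import numpy as np

def simple_adaptive_threshold(echo_data, perc=33.0):
    """Sketch of the magnitude-threshold idea: find the 33rd-percentile
    voxel magnitude in an echo's mean image and keep voxels whose
    magnitude exceeds one third of that value."""
    mean_img = echo_data.mean(axis=-1)           # mean over time, per voxel
    cutoff = np.percentile(mean_img, perc) / 3.0
    return mean_img > cutoff

rng = np.random.default_rng(0)
signal = rng.uniform(500, 1000, size=(90, 50))   # bright, brain-like voxels
noise = rng.uniform(0, 20, size=(10, 50))        # dark, near-zero voxels
data = np.vstack([signal, noise])                # (100 voxels, 50 timepoints)
keep = simple_adaptive_threshold(data)
print(keep.sum())   # the 90 bright voxels survive; the 10 dark ones are dropped
```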

There is another adaptive mask option you can try, --masktype decay, which will drop voxels where later echoes have a larger magnitude than earlier echoes.
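The intuition is that T2* decay should make each later echo dimmer than the one before it, so a voxel that gets brighter at a later echo is suspect. A toy sketch of that check (an illustration of the idea, not tedana’s code):

```python
import numpy as np

def decays_monotonically(echo_means):
    """echo_means: (n_voxels, n_echoes) mean magnitudes per echo.
    True where magnitude never increases from one echo to the next."""
    return np.all(np.diff(echo_means, axis=1) <= 0, axis=1)

echo_means = np.array([
    [1000.0, 700.0, 450.0],   # plausible T2* decay -> keep
    [300.0, 350.0, 200.0],    # echo 2 brighter than echo 1 -> flag
])
print(decays_monotonically(echo_means))   # [ True False]
```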

More info on the math underneath each mask is here: tedana.utils.make_adaptive_mask — tedana 24.0.2 documentation

Overall, I cannot give you advice on your different pipeline choices, because a lot of this is dataset dependent, but I’d recommend looking at the data for each echo that’s being input into tedana. If the voxels removed by the adaptive mask don’t look like they carry meaningful signal, then the problem lies in the data or processing choices. If the adaptive mask looks like it is removing voxels that should have been retained, we can try to tweak the arbitrary thresholds.

Hope this helps

Dan