QC Tedana's Output

Hi Everyone,

I am working with multi-echo resting-state fMRI data. So far, I have preprocessed the images with fMRIPrep and then denoised them with tedana. I have also transformed tedana's output from native to standard space for my further analysis. My question is about the QC of the processed images (in native space): I have not found clear guidelines on the steps to consider for the QC of tedana's output. From the papers I have read (e.g., Kundu et al., 2013, and Setton et al., 2023), I came across the following steps to consider for QC:

  1. Assess whether DVARS and FD are more decoupled after denoising (tedana's optimally combined data vs. the fMRIPrep preprocessed output, or the high-kappa vs. low-kappa data).
  2. Assess whether tSNR > 50.
  3. Remove participants with fewer than 10 BOLD components identified by tedana (given that AIC was chosen for tedpca in tedana).
  4. FD > 0.50 coupled with the denoised time series showing DVARS > 1.

To address steps 1 and 4:

I used fsl_motion_outliers to compute DVARS and FD from the images output by fMRIPrep (space-MNI152NLin2009cAsym_res-2_desc-preproc_bold.nii.gz) and tedana (space-Native_desc-optcom_bold.nii.gz) to see how these two parameters change after denoising. When computing DVARS, I ran into the following issues, which I would highly appreciate clarification on:
The command fsl_motion_outliers applies motion correction, but the fMRIPrep output is already motion corrected (if I am not mistaken, fMRIPrep applies motion correction and also estimates the motion parameters as confounds for further correction). Accordingly, I ran the command both with and without the --nomoco flag to prevent or allow FSL's motion correction (preventing the correction results in negative DVARS values). Also, the DVARS computed with the FSL command is on a different scale than the one estimated by fMRIPrep (the FSL website mentions that "the dvars metric is scaled to approximately match what is done in Power et al. - dividing by the median brain intensity and then multiplying by 1000"). Finally, regardless of the scale, when I plot DVARS for the fMRIPrep output and tedana's denoised (optimally combined) data, the denoised data has almost the same DVARS as the fMRIPrep output!
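For reference, a minimal sketch of how one could pull fMRIPrep's own DVARS estimates out of the confounds file for a side-by-side comparison (the std_dvars / dvars / framewise_displacement column names and the *_desc-confounds_timeseries.tsv file pattern are assumptions based on my understanding of fMRIPrep's outputs):

confounds=sub-XX_ses-XX_task-rest_run-01_desc-confounds_timeseries.tsv
# list which DVARS/FD columns are present in the confounds file
head -n 1 "$confounds" | tr '\t' '\n' | grep -n -E '^(dvars|std_dvars|framewise_displacement)$'
# extract the standardized DVARS column for comparison with the fsl_motion_outliers output
awk -F'\t' 'NR==1 {for (i=1; i<=NF; i++) if ($i=="std_dvars") c=i; next} {print $c}' "$confounds" > fprep_std_dvars.txt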

In summary:

  1. Does fMRIPrep apply motion correction, or does it only estimate the motion parameters and provide them as confounds (i.e., should I include motion correction in my FSL command when estimating DVARS)?
  2. Does fMRIPrep use FSL to estimate FD and DVARS (so that I know how to handle the scale difference between the DVARS reported by fMRIPrep and the one estimated by FSL)?
  3. Does the fact that my tedana image is in native space while the fMRIPrep output is in standard space affect the comparability of the estimated DVARS?

To address the tSNR QC step:

I computed the tSNR maps with fslmaths by estimating Tmean and Tstd and dividing the former by the latter. I am not sure how to apply the tSNR > 50 criterion proposed in the papers. Should I create a GM mask and average the tSNR values within that mask?
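If the GM-mask route makes sense, a minimal sketch of what I have in mind with fslmaths/fslstats (the label-GM_probseg file name and the 0.5 probability threshold are placeholders, and the probability map would first need to be resampled to the grid of the tSNR map):

# binarize a grey-matter probability map at p > 0.5 (arbitrary threshold)
fslmaths sub-xxSUBJECT_IDxx_label-GM_probseg.nii.gz -thr 0.5 -bin GM_mask_xxSUBJECT_IDxx.nii.gz
# mean tSNR within the GM mask
fslstats tsnrxxSUBJECT_IDxx.nii.gz -k GM_mask_xxSUBJECT_IDxx.nii.gz -m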

Lastly, how should I check the step suggested by Setton et al. (2023), "FD > 0.50 coupled with denoised timeseries showing DVARS > 1"? Should the average FD and DVARS per participant be estimated and checked against these criteria? I would appreciate any suggestions on this.
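For example, a minimal sketch of what I currently have in mind, averaging the framewise_displacement and std_dvars columns per run from the confounds file (the column names are assumptions, and whether the per-run mean is the right summary is exactly what I am unsure about):

# per-run mean FD and mean standardized DVARS, skipping "n/a" entries in the first row
awk -F'\t' '
  NR==1 {for (i=1; i<=NF; i++) {if ($i=="framewise_displacement") f=i; if ($i=="std_dvars") d=i}; next}
  $f != "n/a" {fd += $f; nf++}
  $d != "n/a" {dv += $d; nd++}
  END {printf "mean FD = %.3f, mean std DVARS = %.3f\n", fd/nf, dv/nd}
' sub-XX_ses-XX_task-rest_run-01_desc-confounds_timeseries.tsv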

Many thanks in advance,
Ali

Command used:

FD/DVARS code:

fsl_motion_outliers -i /lustre04/scratch/javan/ME_Feb2024/wave2_final/derivatives/sub-xxSUBJECT_IDxx/sub-xxSUBJECT_IDxx/ses-xxSESSIONxx/func/sub-xxSUBJECT_IDxx_ses-xxSESSIONxx_task-rest_run-01_space-MNI152NLin2009cAsym_res-2_desc-preproc_bold.nii.gz -o dvars_motion_outliers_fprep_xxSUBJECT_IDxx.tsv --dvars  -s dvars_values_fprep_xxSUBJECT_IDxx.tsv -p dvars_fprep_xxSUBJECT_IDxx.png --dummy=4 -v --thresh=30

fsl_motion_outliers -i /lustre04/scratch/javan/ME_Feb2024/wave2_final/derivatives/tedana/sub-xxSUBJECT_IDxx_ses-xxSESSIONxx/sub-xxSUBJECT_IDxx_ses-xxSESSIONxx_task-_space-Native_desc-optcom_bold.nii.gz -o dvars_motion_outliers_tedana_xxSUBJECT_IDxx.tsv --dvars  -s dvars_values_tedana_xxSUBJECT_IDxx.tsv -p dvars_tedana_xxSUBJECT_IDxx.png --dummy=4 -v --thresh=30
tSNR code:

fslmaths /lustre04/scratch/javan/ME_Feb2024/wave2_final/derivatives/tedana/sub-xxSUBJECT_IDxx_ses-xxSESSIONxx/sub-xxSUBJECT_IDxx_ses-xxSESSIONxx_task-_space-Native_desc-optcom_bold.nii.gz -Tmean /lustre04/scratch/javan/ME_Feb2024/wave2_final/FD_DVARS/tSNR/denoised/sub-xxSUBJECT_IDxx/mean_image_xxSUBJECT_IDxx.nii.gz
fslmaths /lustre04/scratch/javan/ME_Feb2024/wave2_final/derivatives/tedana/sub-xxSUBJECT_IDxx_ses-xxSESSIONxx/sub-xxSUBJECT_IDxx_ses-xxSESSIONxx_task-_space-Native_desc-optcom_bold.nii.gz -Tstd /lustre04/scratch/javan/ME_Feb2024/wave2_final/FD_DVARS/tSNR/denoised/sub-xxSUBJECT_IDxx/std_image_xxSUBJECT_IDxx.nii.gz
fslmaths /lustre04/scratch/javan/ME_Feb2024/wave2_final/FD_DVARS/tSNR/denoised/sub-xxSUBJECT_IDxx/mean_image_xxSUBJECT_IDxx.nii.gz -div /lustre04/scratch/javan/ME_Feb2024/wave2_final/FD_DVARS/tSNR/denoised/sub-xxSUBJECT_IDxx/std_image_xxSUBJECT_IDxx.nii.gz /lustre04/scratch/javan/ME_Feb2024/wave2_final/FD_DVARS/tSNR/denoised/sub-xxSUBJECT_IDxx/tsnrxxSUBJECT_IDxx.nii.gz

Version: fMRIPrep v23.2.0, tedana v0.0.12

Environment: Singularity

Data formatted according to a validatable standard?

Data is BIDS Validated

I can’t answer all your questions because I’m not an fMRIPrep user, but I can help with the tedana part. First, I see you’re using tedana v0.0.12. That version was released in 2022. I’d recommend updating to the current version (v24.0.1). We’ve made a lot of changes, including bug fixes, improvements to tedana’s QC report, and outputs of additional information that might help with your QC goals.

Explanations of the figures in the tedana report, and how to use that information for QC, are at: Outputs of tedana — tedana 24.0.1 documentation

On some of your more specific questions:

  • I agree that only 10 accepted components is low, but a “good number” might vary depending on the quality and number of volumes in your dataset. I’d suggest first looking at the total number of components. If there are nearly as many components as time points, or the total number of components is less than 1/5 of the number of time points, that’s a known problem we are currently trying to address. The right number of accepted components is trickier, because it will vary with data quality, and runs with more structured noise might have fewer accepted components. This is where our interactive reports are useful, in that you can look at the rejected components to check whether they are plausibly rejected (a quick way to count total and accepted components from the component table is sketched after this list).
  • TSNR post tedana has limited use. If tedana rejects components, it will reduce the total variance of the data and will always increase TSNR. It might be nice to know how much TSNR changes, but it will always go up. If you have task data, you can pick regions with a known response (e.g., primary visual and motor cortices in a visuomotor task) and check whether tedana improves the F or T statistics in those areas.
  • With or without tedana, an ideal TSNR threshold depends a lot on your other acquisition parameters and study goals. TSNR >= 50 is an OK metric for many fMRI studies, but I’d look more carefully at how much TSNR varies across brain regions and subjects. If you’re getting TSNR > 100 in most subjects and one subject is at 70, that’s a warning sign.
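For the component-count check, something along these lines should work on the component table that recent tedana versions write (the desc-tedana_metrics.tsv file name and the classification column are from memory; check the outputs documentation linked above for your version):

# count total and accepted components from tedana's component table
awk -F'\t' '
  NR==1 {for (i=1; i<=NF; i++) if ($i=="classification") c=i; next}
  {total++; if ($c=="accepted") acc++}
  END {print acc " accepted out of " total " components"}
' sub-xxSUBJECT_IDxx_ses-xxSESSIONxx_desc-tedana_metrics.tsv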

I might as well self-promote a bit and reference an article I wrote on QC that outlines my perspective a bit more generally, including a list of questions to answer in the appendix (Frontiers | The art and science of using quality control to understand and improve fMRI data). That’s part of a special issue on QC which includes perspectives from many groups.

I hope this helps.

Dan