I have processed multi-echo resting-state fMRI images (three echoes per scan) using fMRIPrep (v24.1.0) via Singularity. To remove dummy volumes and perform further denoising, I applied the following steps to the fMRIPrep outputs:
(1) removed the dummy volumes from the echo-wise files, (2) ran tedana (v24.0.2) on those trimmed files, (3) removed the dummy volumes from the MNI-space optimally combined file from fMRIPrep, and (4) used the tedana ICA components to denoise the trimmed MNI-space file, as discussed and suggested in a former topic on Neurostars. I took the non-aggressive approach with no confounds (just the rejected components) to denoise the data using Nilearn. After denoising, I extracted the time series from the Schaefer 7-network 200-parcel parcellation and computed the corresponding functional connectivity matrices. However, the resulting connectivity matrices exhibit unusually high values, which seems unexpected to me. I'd appreciate any insights on whether such high connectivity values are typical for this type of sequence and denoising approach. If they are expected, are they considered acceptable? If not, what strategies could be used to address the issue?
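In essence, the non-aggressive denoising step amounts to fitting the full mixing matrix and subtracting only the rejected components' fitted contribution. Here is a sketch of that logic with synthetic stand-ins (in my actual script, `data` is the masked 2D array from the trimmed MNI-space file, `mixing` is tedana's mixing matrix, and `rejected` holds the indices of rejected components; all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_vols, n_voxels, n_comps = 200, 500, 10

# Stand-ins for the real inputs (illustrative only):
# mixing   : tedana ICA mixing matrix (volumes x components)
# data     : trimmed, masked BOLD data (volumes x voxels)
# rejected : indices of components tedana classified as "rejected"
mixing = rng.standard_normal((n_vols, n_comps))
data = mixing @ rng.standard_normal((n_comps, n_voxels)) \
    + 0.1 * rng.standard_normal((n_vols, n_voxels))
rejected = [1, 4, 7]

# Non-aggressive (partial) regression: fit the FULL mixing matrix,
# then remove only the rejected components' fitted contribution.
betas, *_ = np.linalg.lstsq(mixing, data, rcond=None)
denoised = data - mixing[:, rejected] @ betas[rejected, :]
```

This differs from the aggressive approach, which would regress out the rejected time series alone and thereby also remove any accepted-signal variance they share.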
Attached below is the code used for these steps, including running tedana on the fMRIPrep outputs to obtain the mixing matrices needed for the subsequent image denoising: tedana_wave2.txt (3.7 KB)
Code for applying tedana denoising using nilearn: denoise_nonaggr_MTL0002.txt (2.8 KB)
Code to extract connectivity matrices from the Schaefer parcellations: connectivity_schaefer200_wave2_MTL0002_FU84.txt (2.1 KB)
Here is the connectivity matrix csv file: connectivity_nonaggr_schaefer200_MTL0002_FU84.txt (978.4 KB)
The mean connectivity value of the whole matrix is 0.42, and here is the plotted functional connectivity matrix.
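The mean value was computed over the off-diagonal entries, along these lines (a synthetic sketch; note how a shared global signal alone is enough to push the mean correlation toward the values I am seeing):

```python
import numpy as np

rng = np.random.default_rng(0)
n_vols, n_parcels = 200, 200

# Stand-in for the Schaefer-200 parcel time series (volumes x parcels),
# with a shared signal added to mimic a residual global artifact.
global_signal = rng.standard_normal((n_vols, 1))
ts = rng.standard_normal((n_vols, n_parcels)) + 0.8 * global_signal

fc = np.corrcoef(ts.T)                         # 200 x 200 correlation matrix
off_diag = fc[~np.eye(n_parcels, dtype=bool)]  # drop the diagonal
print(f"mean off-diagonal connectivity: {off_diag.mean():.2f}")
```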
The following image shows the connectivity matrix plotted in MATLAB.
tedana should remove noise, but there’s no expectation that it will remove all noise sources. There’s a good chance it won’t remove artifacts with BOLD weighting. For example, if your volunteer was breathing slowly and deeply or doing spontaneous breath holds, that could create a global artifact that’s BOLD weighted.
It's unclear whether this is an issue with a subset of runs or with your entire dataset. If it's a subset of runs, I recommend looking at the *_tedana_report.html file. A guide on how to interpret it is here: Outputs of tedana — tedana 24.0.2 documentation. Given that you're showing a large global artifact, I'd expect at least one relatively high-variance component that was accepted but whose time series and/or component map looks like an artifact. If this is happening in just a couple of runs, you can use ica_reclassify to fix these problems. (Ideally, make a protocol for looking at the reports and consistent rules for when to change classifications.)
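As a starting point for such a protocol, you could flag high-variance accepted components programmatically from tedana's metrics table. A sketch (the column labels follow the usual `desc-tedana_metrics.tsv` layout, but verify them against your own outputs; the 8% threshold is just an illustrative cutoff):

```python
import csv
import io

# Synthetic stand-in for tedana's desc-tedana_metrics.tsv
# (column names assumed; check against your own file).
metrics_tsv = io.StringIO(
    "Component\tkappa\trho\tvariance explained\tclassification\n"
    "ICA_00\t80.1\t15.2\t12.5\taccepted\n"
    "ICA_01\t20.3\t60.7\t25.0\trejected\n"
    "ICA_02\t35.0\t33.9\t9.1\taccepted\n"
)
rows = list(csv.DictReader(metrics_tsv, delimiter="\t"))

# Flag accepted components explaining more than 8% of the variance
# so they can be inspected manually in the HTML report.
flagged = [r["Component"] for r in rows
           if r["classification"] == "accepted"
           and float(r["variance explained"]) > 8.0]
print(flagged)
```

Any component this flags would then be looked up in the report, and reclassified with ica_reclassify only if its map/time series clearly looks artifactual.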
If this is happening on many runs and, when you look at the reports, you see consistent characteristics among bad components that were accepted, it might be possible to tweak the accept/reject decision tree to address the issue. (It could also be a sign of an underlying data problem.)
Yes, this pattern of high connectivity values was prominent across the connectivity matrices of several randomly selected runs.
Looking at the reports for those individuals, there was apparently no high-variance accepted component with an artifact-like profile. Please find a copy of the kappa/rho and explained-variance figures below for your reference and to check my interpretation. I have not noticed this scenario in the other visualized reports either, so I would say there is no misclassification of components that would cast doubt on the choice of decision tree.
Do you think that integrating the nuisance regressors (motion, WM, and CSF) with, say, the demo external-regressors single-model decision tree could be beneficial here? If so, since that decision tree has not yet been validated, are there any special checks to ensure its performance on our data? Additionally, how different would it be to use a decision tree that takes the regressors into account versus using the default decision tree and including those regressors, along with the rejected components, as confounds when applying ICA denoising from the already estimated tedana components?
Looking at your report, one thing I notice is that 80–85% of your explained variance is rejected. This isn't implausible, but it's at the higher end of plausible. Compare this to some of your other runs. Perhaps this run is just particularly noisy, which makes it harder to isolate the relevant signal?
Looking at your fMRIPrep report, I do see systematic DVARS fluctuations (maybe at breathing rate?) and some spikes that appear in the carpet plot, so there clearly is global noise.
In the scatter plot, I see a bunch of green dots with rho values above the rho threshold and kappa values below the kappa threshold (dashed lines). Those are the ones most likely to be incorrectly accepted and can merit further inspection. If you run tedana with --tree minimal, most or all of those will be rejected. It might be worth trying that and comparing results.
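For reference, a minimal-tree rerun would look something like this (file names and echo times below are placeholders for your own acquisition; keep all other options identical to your original tedana call so the comparison is fair):

```shell
tedana \
  -d sub-01_echo-1_desc-trimmed_bold.nii.gz \
     sub-01_echo-2_desc-trimmed_bold.nii.gz \
     sub-01_echo-3_desc-trimmed_bold.nii.gz \
  -e 12.0 28.0 44.0 \
  --tree minimal \
  --out-dir tedana_minimal
```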
The external regressors functionality works, but the benefits of the included decision tree have not been systematically evaluated yet. I very much welcome people testing it out. The underlying logic is to reject components that significantly correlate with external regressors and the external regressors also model at least 50% of the variance of the component’s time series. I consider this a reasonable hypothesis for a useful rejection criterion, but it really needs testing for a range of regressors & data sets.
That said, if you give tedana external regressors as input, you can apply classification tags to components even if you don’t change their classifications. That means, you’d be able to inspect the components in the report and the hover text will show which ones significantly correlate with motion, respiration, CSF, etc.
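Assembling such a regressors file from fMRIPrep's confounds output might look like the sketch below (synthetic stand-in data; the column names follow fMRIPrep's conventions, but check them against your confounds TSV, and take the exact option for passing the file to tedana from the current tedana documentation):

```python
import numpy as np
import pandas as pd

n_vols, n_dummy = 205, 5
rng = np.random.default_rng(0)

# Synthetic stand-in for fMRIPrep's *_desc-confounds_timeseries.tsv;
# these column names match fMRIPrep's usual conventions.
cols = ["trans_x", "trans_y", "trans_z",
        "rot_x", "rot_y", "rot_z", "white_matter", "csf"]
confounds = pd.DataFrame(rng.standard_normal((n_vols, len(cols))),
                         columns=cols)

# Drop the dummy volumes so the regressors line up with the trimmed
# echo-wise data, then write the TSV to pass to tedana.
external = confounds[cols].iloc[n_dummy:].reset_index(drop=True)
external.to_csv("external_regressors.tsv", sep="\t", index=False)
```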
Also, several groups have been evaluating external regressors, they’ll be giving a brief presentation at a multi-echo fMRI zoom meeting on Friday, March 14, 2025. More info here: https://groups.google.com/g/tedana-newsletter/c/gmkvjP1MIyU
Thank you for your comments. I have tested the suggested potential contributing factors, and here are the details:
I checked the reports of several runs and participants; a similar pattern of low explained variance from the BOLD components was prominent to a similar extent across all of them.
As you expected, using --tree minimal removed all the suspicious components, and the rejected/accepted components' rho/kappa values fell within the expected ranges relative to the rho/kappa thresholds. You may find tedana's report attached below, and the code here: tedana_wave2_tree_minimal.txt (3.6 KB)
Further, I denoised the trimmed (dummy-removed) MNI-space optimally combined image from fMRIPrep using tedana's "denoising with components" approach (applying the mixing matrix obtained from tedana to the image): denoise_nonaggr_MTL0002_NoConfounds.txt (3.1 KB)
The connectivity matrix recomputed from this step can be found below:
Further, I included confounds when denoising with components. denoise_nonaggr_MTL0002_confounds.txt (3.1 KB)
Here is the connectivity matrix obtained from the denoised image:
Another step I tested was to keep the decision tree as default but include confounds and the global signal. Below are the results of this process:
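For completeness, this variant amounts to stacking the rejected components, the nuisance confounds, and the global signal into one design and regressing the whole design out (a synthetic sketch of the logic; in my actual script the regression is applied to the masked image data, and all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_vols, n_voxels = 200, 500

data = rng.standard_normal((n_vols, n_voxels))       # masked BOLD data
rejected_comps = rng.standard_normal((n_vols, 3))    # rejected ICA time series
confounds = rng.standard_normal((n_vols, 8))         # motion, WM, CSF, ...
global_signal = data.mean(axis=1, keepdims=True)     # mean over voxels

# Stack all nuisance regressors into one design and regress them out
# (the aggressive form: the full fitted design is removed).
design = np.column_stack([rejected_comps, confounds, global_signal])
betas, *_ = np.linalg.lstsq(design, data, rcond=None)
cleaned = data - design @ betas
```

Note that including the global signal this way removes any shared global fluctuation outright, which is expected to pull the mean connectivity down substantially.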