Tedana component selection

Hi all,

I’m wondering if there’s an explicit description anywhere of the component selection algorithm used by tedana (v0.0.10). The documentation describes it as “the Kundu decision tree v2.5” and references Kundu et al. (2013), “Integrated strategy for improving functional connectivity mapping using multiecho fMRI.” However, the SI methods of that paper (end of page 1) only describe kappa and rho thresholds determined by elbow finding, which doesn’t seem to account for all of the TEDICA rejection codes mentioned in the tedana documentation.
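(For reference, my rough reading of the elbow-based thresholding in the paper is something like the sketch below; the variable names and details are my guesses, not tedana’s actual code.)

```python
import numpy as np

def find_elbow(values):
    # Sort descending, then take the point farthest (by perpendicular
    # distance) from the straight line joining the first and last points.
    vals = np.sort(np.asarray(values, dtype=float))[::-1]
    coords = np.column_stack([np.arange(len(vals), dtype=float), vals])
    line = coords[-1] - coords[0]
    line /= np.linalg.norm(line)
    vecs = coords - coords[0]
    # Subtract each point's projection onto the line to get its
    # perpendicular offset, then pick the largest offset.
    proj = np.outer(vecs @ line, line)
    dists = np.linalg.norm(vecs - proj, axis=1)
    return vals[int(np.argmax(dists))]

# e.g. kappa_thresh = find_elbow(kappa); keep components above it
```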

To give two examples of the codes I mean: I003 is described as “more significant voxels in S0 model than R2 model,” but voxel counts aren’t mentioned in the Kundu paper at all. And I009 and I010 describe “mid-kappa artifacts” - what does that refer to?

Thanks,

Ben

I hope the flowcharts linked in this comment will be helpful: Creating a flowchart for visualizing tedana · Issue #355 · ME-ICA/tedana · GitHub
This PR ([REF] Decision tree modularization by tsalo · Pull Request #592 · ME-ICA/tedana · GitHub) should significantly clarify that part of the code, but it’s still a work in progress (hopefully to be finished this summer).
FWIW, the paper is not particularly clear on all the underlying details of the component selection algorithm. Things like the significant voxel count measure were always in the code, but not well-described in the manuscript. Several of us are working on both making the existing process clearer and hopefully improving it.
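To make the voxel count measure concrete, the idea is roughly the sketch below. F_R2 and F_S0 are per-voxel F-statistics for how well a component’s signal fits the TE-dependent (R2*) vs. TE-independent (S0) model; the exact thresholding in the code differs, so treat the names and cutoff here as illustrative:

```python
import numpy as np

def s0_dominated(F_R2, F_S0, f_thresh):
    # Count voxels where each model's fit is "significant"; if the
    # TE-independent (S0) model wins in more voxels than the
    # TE-dependent (R2) model, the component looks artifactual.
    n_sig_r2 = int(np.sum(F_R2 > f_thresh))
    n_sig_s0 = int(np.sum(F_S0 > f_thresh))
    return n_sig_s0 > n_sig_r2  # roughly what rejection code I003 flags
```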
Happy to answer any additional questions.
–Dan

Thanks, that’s helpful. The Kundu paper describes removing components with somewhat low kappa that are outliers in variance explained / percent signal change. Are rejection codes I007, I009, and I010 all intended to identify those components?

Pretty much. If you look at the comparison metrics flowchart, you’ll see that the D table is the average ranking of 5 different metrics. The rejection criteria that use the D table are effectively trying to find components that are relatively bad (compared to other components) on all 5 of these metrics. I strongly suspect there are several slightly different rejection steps because each was added to deal with a distinct edge case. I’ve also seen some of these cause problems & reject components that should have been kept, which is why I recommend always looking at the reports to make sure nothing looks weird.
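If it helps, the D table computation is conceptually just this (a sketch, not the actual code; I’m glossing over the orientation of each ranking, i.e. whether high or low counts as “bad” for a given metric):

```python
import numpy as np
from scipy import stats

def mean_rank_table(metrics):
    # metrics: (n_components, n_metrics) array, oriented so that a
    # higher value is "worse" on every metric. Rank each metric across
    # components, then average the ranks per component.
    ranks = np.apply_along_axis(stats.rankdata, 0, metrics)
    return ranks.mean(axis=1)

# Components with a bad mean rank across all 5 metrics are the ones
# the D-table-based rejection criteria are trying to catch.
```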

One of my plans, which will happen once PR #592 is merged, is to have a conservative decision tree that excludes all of these extra reasons to reject components, as well as a flexible structure so that anyone can tweak the decision tree without editing the actual code.
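To illustrate the kind of flexibility I mean (this is purely a mock-up, not the format PR #592 actually defines), a decision tree step could be declared as data rather than code:

```python
# Hypothetical node spec; every key and value here is made up for
# illustration, not taken from tedana.
step = {
    "action": "reject",
    "if": {"metric": "rho", "op": ">", "threshold": "rho_elbow"},
    "rationale": "rho above the rho elbow suggests an S0-dominated component",
}
```

Swapping a threshold or dropping a step would then just mean editing the spec, not the selection code itself.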
