Assuming all pre-combination preprocessing is done identically, and --tree kundu is used with otherwise default settings, I would like to confirm whether the differences between Kundu’s MEICA v2.5 and the latest tedana are limited to the steps below:
(1) An adaptive mask is applied before data combination, with each echo’s threshold set to the signal of the voxel at the 33rd percentile in the shortest echo, divided by 3.
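To make the thresholding rule in step (1) concrete, here is a minimal pure-Python sketch of it as stated above. This is only an illustration of the described rule, not tedana’s actual implementation (the real logic lives in tedana’s `make_adaptive_mask` and is more involved); `echo_data` and both helper functions are hypothetical names.

```python
def percentile(values, pct):
    """Linear-interpolation percentile of a list of numbers."""
    s = sorted(values)
    if not s:
        raise ValueError("empty input")
    k = (len(s) - 1) * pct / 100.0
    lo, hi = int(k), min(int(k) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

def adaptive_mask(echo_data):
    """Per voxel, count how many echoes exceed the threshold:
    (signal of the 33rd-percentile voxel in the shortest echo) / 3.
    `echo_data[e][v]` is the signal for echo e at voxel v; echoes are
    assumed to be ordered by echo time, shortest first."""
    shortest = echo_data[0]
    thresh = percentile([v for v in shortest if v > 0], 33) / 3.0
    n_voxels = len(shortest)
    return [sum(1 for echo in echo_data if echo[v] > thresh)
            for v in range(n_voxels)]
```

The returned count per voxel is what makes the mask “adaptive”: later steps can use only the echoes with good signal at each voxel rather than applying one binary mask everywhere.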
(2) PCA components are selected based on a moving average (stationary Gaussian) process (Li et al., 2007). Under default settings, no PCA component is retained based on kappa/rho (as is done in Kundu’s MEICA).
I’ll add that if you use the --tedpca kundu --tree meica options, the core steps of the two methods should be extremely close, if not identical. The big caveat is that both methods are based on ICA, and ICA results depend on the initial seed value, the specific algorithm version, and possibly the system hardware. This is an issue even when comparing MEICA runs against each other.
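For concreteness, a tedana call using those options might look like the sketch below. The file names and echo times are placeholders, and I’m assuming the current CLI flag names, so check tedana --help for your version.

```shell
# Placeholder inputs; substitute your own echo files and echo times (ms).
tedana -d echo1.nii.gz echo2.nii.gz echo3.nii.gz \
  -e 14.5 38.5 62.5 \
  --tedpca kundu \
  --tree meica \
  --seed 42   # fix the ICA random seed so reruns are reproducible
```

If I remember correctly, tedana can also accept a precomputed ICA mixing matrix (via a --mix option), which is how you would set up an apples-to-apples comparison against a MEICA run.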
If you run MEICA v2.5 and tedana with the same ICA mixing matrix, the results should be identical or nearly identical (although I haven’t tried that in a long time).
That said, part of the reason tedana exists is that the MEICA code was, um, difficult to understand and build upon. MEICA contained multiple reasonable, but arbitrary, algorithm choices. The tedana developers spent a lot of time trying to replicate the MEICA method as closely as possible and plan to keep that approach available as an executable option. If someone finds divergences between MEICA and tedana, we can try to address them. The most recent example was Align with old meica by handwerkerd · Pull Request #952 · ME-ICA/tedana · GitHub, where we realized there was a small difference between what we called the kundu tree and what was done in MEICA v2.5 and fixed it (and renamed that new tree meica). I think the current developers are more interested in improving the method than in tracking down additional divergences, but if someone finds a non-trivial divergence, please let us know.