How is nilearn.signal.high_variance_confounds related to tCompCor?

Hello everyone,

While reading the code of nilearn.signal.high_variance_confounds, a method related to tCompCor, I noticed a few differences from the original method described in Behzadi et al. (2007) and wondered why they were introduced in Nilearn:

  • Constant and linear trends aren’t always removed in high_variance_confounds: this can be toggled using the detrend argument.
  • When detrend is True (the default), a given percentile of the voxels with the highest variance is kept to extract confounds. However, the voxel time series aren’t scaled to unit variance before computing the SVD, contrary to Behzadi et al. Is there a reason why this scaling was skipped in Nilearn?
  • When detrend is False, the differences are larger. First, voxels are selected based on their sum of squares rather than their variance. Second, the SVD is computed on the uncentered matrix, which makes the resulting confounds harder to interpret than with a centered matrix (the principal axes are not the axes of highest variance, but the axes of maximal sum of squares, i.e. inertia). Does using detrend=False make sense? I’ve sketched both variants below to make the comparison concrete.
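Here is a minimal sketch of what I mean, on a toy array. The first part just calls Nilearn’s function; the second part is my reading of the Behzadi et al. recipe (detrend, keep the highest-variance voxels, scale them to unit variance, then SVD), not Nilearn code, and the percentile / number of components are purely illustrative:

```python
import numpy as np
from scipy.signal import detrend as linear_detrend
from nilearn.signal import high_variance_confounds

rng = np.random.default_rng(0)
series = rng.standard_normal((200, 5000))  # toy data, shape (n_timepoints, n_voxels)

# Nilearn's implementation.
confounds_nilearn = high_variance_confounds(
    series, n_confounds=5, percentile=2.0, detrend=True
)

# Sketch of the Behzadi et al. (2007) tCompCor steps, for comparison.
X = linear_detrend(series, axis=0)           # remove constant and linear trends
variances = X.var(axis=0)
threshold = np.percentile(variances, 98.0)   # keep the top 2% most variable voxels
X_high = X[:, variances >= threshold]
X_high = X_high / X_high.std(axis=0)         # unit-variance scaling (the step Nilearn skips)
U, s, Vt = np.linalg.svd(X_high, full_matrices=False)
confounds_behzadi = U[:, :5]                 # first temporal components as confounds

print(confounds_nilearn.shape, confounds_behzadi.shape)  # (200, 5) (200, 5)
```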

Many thanks,

Samuel

Thanks for discussing this.
I’m not aware of a good motivation for using detrend=False.
I think that in general, we should rely on the default behavior.
Best,
Bertrand


Thank you for answering! The fact that detrend is an argument led me to believe that detrend=False could actually be useful. I’m OK with always using detrend=True!

However, is there a reason for dropping the unit-variance scaling used in the original method from Behzadi et al., or is it simply an empirical choice?
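To make the question concrete, here is a toy example (plain NumPy, nothing from Nilearn) showing how the first SVD component can change depending on whether columns are scaled to unit variance: without scaling, a strong signal carried by a few high-amplitude voxels dominates; with scaling, a weaker signal shared by many voxels does.

```python
import numpy as np

rng = np.random.default_rng(1)
n_t = 100
# A low-amplitude signal shared by many voxels, and a high-amplitude one in a few voxels.
shared = np.sin(np.linspace(0, 8 * np.pi, n_t))
strong = rng.standard_normal(n_t)
X = np.column_stack(
    [0.5 * shared + 0.05 * rng.standard_normal(n_t) for _ in range(40)]
    + [5.0 * strong + 0.05 * rng.standard_normal(n_t) for _ in range(3)]
)
X = X - X.mean(axis=0)

def first_component(M):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, 0]

u_raw = first_component(X)                     # no scaling: the few high-amplitude voxels dominate
u_scaled = first_component(X / X.std(axis=0))  # unit variance: the widespread signal wins

print(abs(np.corrcoef(u_raw, strong)[0, 1]))     # close to 1
print(abs(np.corrcoef(u_scaled, shared)[0, 1]))  # close to 1
```

So skipping the scaling biases the extracted confounds towards the very highest-variance voxels within the selected set, which is why I was curious about this choice.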

Best,

Samuel