While reading the code of `nilearn.signal.high_variance_confounds`, a method related to tCompCor, I noticed a few differences from the original method described in Behzadi et al. (2007), and I wondered why they were introduced in Nilearn:
- Constant and linear trends aren't always removed in `high_variance_confounds`: this can be toggled with the `detrend` argument. If `detrend=True` (the default), then a given percentile of the voxels with the highest variance is kept to extract confounds. However, voxel time series aren't scaled to unit variance before computing the SVD, contrary to Behzadi et al. Is there a reason why this scaling was skipped in Nilearn?
- If `detrend=False`, the difference is bigger. First, voxels are kept based on their sum of squares, not their variance. Second, the SVD is computed on the uncentered matrix, which makes the resulting confounds harder to interpret than with a centered matrix (i.e. the principal axes are no longer the axes of highest variance, but the axes of maximal sum of squares, or inertia). Does using `detrend=False` make sense here?
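To make the comparison concrete, here is a minimal sketch of the pipeline as I understand it. This is not Nilearn's actual implementation; the function name and the `scale` flag are hypothetical. With `scale=True` it follows Behzadi et al. (2007) (unit-variance scaling of the retained voxels before the SVD); with `scale=False` and `detrend=False` it reproduces the behaviour questioned above (selection by sum of squares, SVD on the uncentered matrix).

```python
import numpy as np

def high_variance_confounds_sketch(series, n_confounds=5, percentile=2.0,
                                   detrend=True, scale=True):
    """Hypothetical sketch of tCompCor-style confound extraction.

    series : 2-D array, shape (n_timepoints, n_voxels).
    """
    series = np.asarray(series, dtype=float)
    if detrend:
        # Regress out constant and linear trends from each voxel.
        t = np.arange(series.shape[0], dtype=float)
        design = np.column_stack([np.ones_like(t), t])
        beta, *_ = np.linalg.lstsq(design, series, rcond=None)
        series = series - design @ beta

    # Voxel selection criterion: sum of squares. After detrending the
    # mean is zero, so this is proportional to the variance; without
    # detrending it differs from Behzadi's variance criterion.
    criterion = np.sum(series ** 2, axis=0)
    n_keep = max(1, int(series.shape[1] * percentile / 100.0))
    keep = np.argsort(criterion)[-n_keep:]
    selected = series[:, keep]

    if scale:
        # Behzadi et al.: center and scale each retained voxel to unit
        # variance before the SVD.
        std = selected.std(axis=0)
        std[std == 0] = 1.0
        selected = (selected - selected.mean(axis=0)) / std

    # The left singular vectors are the confound time series.
    u, _, _ = np.linalg.svd(selected, full_matrices=False)
    return u[:, :n_confounds]
```

For example, on a random `(50, 1000)` array, `high_variance_confounds_sketch(X)` keeps the top 2% of voxels (20 of them) and returns a `(50, 5)` matrix of orthonormal confound regressors.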