Acceptable tDOF loss for denoising

I’m curious what one would consider an acceptable loss of temporal degrees of freedom (tDOF) when deciding on a denoising pipeline for functional connectivity (either resting state or a continuous task, i.e., no discrete blocks or events). This mainly concerns tDOF loss from a large number of regressors, as discussed in Parkes et al. and Ciric et al., rather than from removing timepoints via censoring (see this excellent post discussing concerns about tDOF loss due to censoring).

I have a dataset, preprocessed with fmriprep, with long runs (~30 min, ~900 TRs), and I’m in the process of determining the best denoising strategy. I’ve mainly been looking at common metrics reported in denoising studies (e.g., Parkes et al., Ciric et al.), such as QC-FC and QC-FC distance dependence. Of the strategies I’ve computed, an aCompCor50 strategy (i.e., the top k components that explain 50% of the variance) that also includes the 24 motion parameters does an excellent job: its QC-FC is as low as the strategies that include global signal regression (GSR), but unlike all the GSR models, it also has the lowest distance dependence. Modularity is also quite reasonable, and certainly better than under the less optimal approaches. And, of course, this approach avoids the GSR controversies as well.
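For concreteness, here is a minimal sketch of how I’m assembling that model from fmriprep’s confounds output (the filename is a placeholder, and the column naming assumes a reasonably recent fmriprep version):

```python
import pandas as pd

# Placeholder filename; substitute your own fmriprep confounds TSV.
confounds = pd.read_csv("sub-01_task-rest_desc-confounds_timeseries.tsv", sep="\t")

# 24 motion parameters: 6 realignment estimates plus their derivatives,
# squares, and squared derivatives (all share the trans_/rot_ prefixes).
motion = [c for c in confounds.columns if c.startswith(("trans_", "rot_"))]

# All aCompCor components fmriprep retained (top 50% variance by default).
acompcor = [c for c in confounds.columns if c.startswith("a_comp_cor_")]

# Derivative columns are n/a at the first timepoint; zero-fill them.
design = confounds[motion + acompcor].fillna(0)
print(f"{design.shape[1]} nuisance regressors over {design.shape[0]} TRs")
```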

The problem, however, is that the aCompCor50 strategy results in 200+ regressors (around 205 on average), with an average tDOF loss of ~23%. It is true that (a) the number of components will scale with the number of timepoints, and (b) ~23% loss is much better than the 30–39% loss reported in Parkes et al., but 200+ regressors still seems huge.
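(For transparency, the ~23% figure is just regressors as a fraction of timepoints:)

```python
# tDOF loss here = nuisance regressors consumed / timepoints available.
n_regressors, n_trs = 205, 900
print(f"tDOF loss: {n_regressors / n_trs:.1%}")  # -> 22.8%
```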

I should say that I’m not doing any censoring that removes timepoints, so the actual sample size used to compute the connectivity values (Pearson r) is unaffected.

As discussed in Parkes et al., one key check is whether individual/group differences in connectivity are attributable to the varying tDOF across subjects, which is easy enough to do when I get to the analyses (see the sketch below).
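I’m imagining something along these lines, correlating each edge with per-subject tDOF loss in the same spirit as QC-FC (the array names and shapes are my own assumptions):

```python
import numpy as np
from scipy import stats

def tdof_fc_correlations(fc, tdof_loss):
    """Correlate each connectivity edge with per-subject tDOF loss.

    fc: (n_subjects, n_edges) array of Fisher-z connectivity values.
    tdof_loss: (n_subjects,) array of fractional tDOF loss per subject.
    A distribution of r values centered near zero would suggest the
    varying tDOF is not driving connectivity differences.
    """
    return np.array([stats.pearsonr(fc[:, e], tdof_loss)[0]
                     for e in range(fc.shape[1])])
```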

One solution using optimized components is mentioned here, but it strikes me that this approach involves re-computing the aCompCor components rather than adjusting the aCompCor regressors that fmriprep has already produced. I would prefer the latter; a sketch of what I mean is below.
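One workaround I’ve considered (I’m not sure how principled it is) is to cap the component count using fmriprep’s existing outputs, e.g., keeping only the first k combined-mask components, since fmriprep orders them by explained variance. The filenames and k are placeholders, and the JSON sidecar layout assumes a recent fmriprep:

```python
import json
import pandas as pd

confounds = pd.read_csv("sub-01_desc-confounds_timeseries.tsv", sep="\t")
# The JSON sidecar carries per-component metadata (mask, variance explained).
with open("sub-01_desc-confounds_timeseries.json") as f:
    meta = json.load(f)

k = 5  # fixed, small component count; a placeholder choice
keep = [c for c in confounds.columns
        if c.startswith("a_comp_cor_") and meta[c].get("Mask") == "combined"][:k]
```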

Does anyone have any thoughts or similar experiences?
