I have a pipeline where data are preprocessed with fmriprep and extracted for analysis using nilearn's NiftiMasker objects. Often, the data are used for connectivity analysis; typical confound regression is performed using the confounds produced by fmriprep, along with standardization, temporal filtering, and detrending.

I've looked at recent papers (e.g., Arbabshirani et al (2014), Bright et al (2017), Afyouni et al (2019), Honari et al (2019); Olszowy et al (2019) for a recent analysis in task-based fMRI) discussing the impact of high autocorrelation in fMRI data and, relatedly, pre-whitening of the data and the confound regressors.

My question mainly concerns the confound regressors. Can I continue to use nilearn for confound regression, or should I switch to something like AFNI's `3dREMLfit` or FSL's `film_gls`/`fsl_glm`? Looking into `nilearn.signal.clean`, I can't seem to find anything about pre-whitening (but I could be missing something).
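For concreteness, the confound regression step itself is just an OLS projection of the confounds out of the time series; a minimal numpy sketch of what I mean (the shapes and variable names here are arbitrary, not nilearn internals):

```python
import numpy as np

rng = np.random.default_rng(0)
n_tr, n_vox, n_conf = 200, 50, 6

Y = rng.standard_normal((n_tr, n_vox))   # extracted time series (TRs x voxels/regions)
X = rng.standard_normal((n_tr, n_conf))  # confound regressors (TRs x confounds)

# OLS fit of the confounds, then subtract the fitted part
beta = np.linalg.lstsq(X, Y, rcond=None)[0]
Y_clean = Y - X @ beta

# the residuals are orthogonal to every confound column
print(np.allclose(X.T @ Y_clean, 0.0, atol=1e-8))
```

My worry is precisely that this projection treats every time point as exchangeable, with no model of the temporal autocorrelation.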

I'm also just curious what others do when it comes to pre-whitening (of either the fMRI data itself or the confound regressors), and what additional perspectives there are to consider.

Thanks

Pre-whitening is important to ensure that parametric tests are correct. The reason is that they rely on an estimate of the noise degrees of freedom, which is altered by temporal correlations. Pre-whitening removes these correlations and hence ensures the validity of the parametric tests.

Now, for resting-state fMRI, this may not be useful at all. It would, for instance, be useful if you wanted to assess with a parametric test that the observed correlation between two regions' time series is greater than 0 (for the reasons mentioned above), but this is probably not what you want to do.
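To illustrate the degrees-of-freedom point: two *independent* AR(1) series produce null correlations far more spread out than the nominal 1/sqrt(n) a parametric test assumes. A quick simulation sketch (parameters chosen just for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
n, rho, n_sim = 200, 0.8, 2000

def ar1(n, rho, rng):
    """Generate one AR(1) series x[t] = rho * x[t-1] + noise."""
    x = np.zeros(n)
    e = rng.standard_normal(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + e[t]
    return x

# correlations between pairs of independent autocorrelated series
r = np.array([np.corrcoef(ar1(n, rho, rng), ar1(n, rho, rng))[0, 1]
              for _ in range(n_sim)])

# for white series the null sd is about 1/sqrt(n); here it is much wider,
# so naive p-values on the raw correlations would be badly anti-conservative
print(r.std(), 1 / np.sqrt(n))
```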

My 2c,

Thanks for your input! Regarding parametric tests, wouldn’t a simple correlation between two timeseries be considered parametric? In that case, I would think that estimating the correlation between timeseries itself would benefit from pre-whitening. Or, related to my question, wouldn’t pre-whitening data + regressors during nuisance regression yield more accurate residuals (‘denoised’ data)?

In my view, the "parametric" part comes into play when you try to attach some statistical significance to the correlation value. The whitened estimate arguably has better properties than the non-whitened one, but AFAIK most people don't do it because:

- when autocorrelation is not too high, the potential improvement brought by whitening is minor
- it is a bit disturbing to apply correlation analysis to signals that have been whitened with different filters, because their initial autocorrelations were different
- neural activity of interest is thought to be slow, hence autocorrelated, so whitening may hurt. The theory we're talking about deals with autocorrelated noise, but with resting-state data it is not clear what is signal and what is noise
- the whitening filter has to be estimated from the data, which adds a step to the analysis pipeline
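On that last point, if you did want to try it, the simplest version is AR(1) pre-whitening: estimate the lag-1 coefficient from first-pass residuals, then apply the same filter to data and regressors before refitting. A rough sketch, assuming a single AR(1) model (function name and setup are made up for illustration):

```python
import numpy as np

def prewhiten_ar1(y, X):
    """AR(1) pre-whitening of one time series y and design matrix X."""
    # first-pass OLS residuals
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    res = y - X @ beta
    # lag-1 autocorrelation of the residuals -> whitening coefficient
    rho = np.dot(res[1:], res[:-1]) / np.dot(res, res)
    # apply the same filter v[t] = u[t] - rho * u[t-1] to data and design
    yw = y[1:] - rho * y[:-1]
    Xw = X[1:] - rho * X[:-1]
    return yw, Xw, rho

# toy check: a known effect buried in AR(1) noise with rho = 0.7
rng = np.random.default_rng(1)
n = 300
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.7 * e[t - 1] + rng.standard_normal()
y = X @ np.array([1.0, 0.5]) + e

yw, Xw, rho = prewhiten_ar1(y, X)
print(rho)  # recovered coefficient should be near the true 0.7
```

Note both the data and the regressors get the same filter, which is exactly what tools like `3dREMLfit` do (with richer noise models) and what a plain projection skips.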

HTH

Ah, that all makes sense and clarifies some of my concerns about why it's uncommon (and hence not implemented in nilearn). Thanks for your input!