Denoising: before or during GLM?

Hi everyone,

I have a list of confounds (generated by fmriprep), and I was wondering if there is any substantial difference in the following approaches:

  1. regressing some of them out before the GLM (using nilearn.image.clean_img), and then running the GLM in SPM on the denoised data
  2. running the GLM on the fmriprep functionals, including the confounds as nuisance regressors in the model

Side question: after the GLM I am going to perform an MVPA analysis at the subject level, so ideally the best approach would be the one preserving the spatial distribution of active voxels. Would denoising with 24 head-motion parameters (24HMP), 8 physiological parameters or aCompCor, and spike regression be too aggressive for this specific case? If yes, what would you suggest?




I am also trying to figure out denoising, so unfortunately I cannot recommend anything strongly, but I ran into the paper by Linden Parkes et al. (2018), "An evaluation of the efficacy, reliability, and sensitivity of motion correction strategies for resting-state functional MRI", and thought it may help you too.

As for the second question, there is a good paper that may help: Lindquist et al. (2018), "Modular preprocessing pipelines can reintroduce artifacts into fMRI data". There, the authors tackle the issue of what happens when preprocessing steps are applied sequentially.

Good luck!

Thanks, Ana.

I’ll have a look at the first paper you suggested.

Regarding the second paper: I am applying all the denoising steps in the same line of code, so I suppose it should not be considered a sequential approach (I guess?).

Thanks for the info!

I guess it depends on the one-liner you use. I am using nilearn’s clean_img and as far as I understand, some of the steps are sequential despite it being written in one line only. I’m still looking into this though…
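To make the sequential-vs-joint distinction concrete, here is a toy numpy sketch (purely illustrative; `A` and `B` stand in for two hypothetical sets of nuisance regressors, e.g. motion confounds and drift terms):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
y = rng.standard_normal(n)          # a toy voxel time series
A = rng.standard_normal((n, 3))     # e.g. motion confounds
B = rng.standard_normal((n, 3))     # e.g. drift/filter regressors

def ols_resid(y, X):
    """OLS residuals: remove the column space of X from y."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Sequential cleaning: remove A, then remove B
seq = ols_resid(ols_resid(y, A), B)

# Joint cleaning: remove A and B in a single regression
joint = ols_resid(y, np.column_stack([A, B]))

# The joint residuals are orthogonal to A, but the second sequential
# step reintroduces variance correlated with A:
# abs(A.T @ joint).max() is ~0, abs(A.T @ seq).max() is not.
```

This is the Lindquist et al. point in miniature: unless the two regressor sets are mutually orthogonal, applying them one after the other is not equivalent to a single joint regression.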


Good to know! I’ll also have a look into this.


I would say you should implement option 2. What happens with option 1 is that your signal has been projected onto a subspace, so it has lost degrees of freedom, but the GLM does not know that, and it will improperly infer the variance of the residuals. Another issue is that the signal of interest you are regressing no longer lives in the same subspace as your denoised time series, which will again interfere with the regression. This is indeed similar to the problem of sequential denoising pointed out by @AnaTomomi
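The degrees-of-freedom point can be illustrated with a small numpy sketch (toy data; the regressor counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k_task, k_conf = 200, 2, 10
X_task = rng.standard_normal((n, k_task))   # task regressors
X_conf = rng.standard_normal((n, k_conf))   # nuisance regressors
y = X_task @ np.array([1.0, -0.5]) + rng.standard_normal(n)

def ols_resid(y, X):
    """OLS residuals: remove the column space of X from y."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Option 1: denoise first, then fit a task-only GLM on the cleaned data
y_clean = ols_resid(y, X_conf)
r1 = ols_resid(y_clean, X_task)

# The task-only GLM sees k_task regressors, so it divides by n - k_task ...
sigma2_naive = (r1 @ r1) / (n - k_task)
# ... but k_conf degrees of freedom were already spent during denoising:
sigma2_true_df = (r1 @ r1) / (n - k_task - k_conf)

# Same residuals, different df accounting: option 1's GLM underestimates
# the residual variance. Fitting task and confounds in one design
# (option 2) keeps the bookkeeping correct automatically.
```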

EDIT: I forgot to point out this recent paper on evaluating preprocessing strategy on task & rest data
Mascali, D, Moraschi, M, DiNuzzo, M, et al. Evaluation of denoising strategies for task‐based functional connectivity: Equalizing residual motion artifacts between rest and cognitively demanding tasks. Hum Brain Mapp. 2020; 1– 24.

With the caveat that the authors did not use fmriprep or a diverse set of data, so it is unclear whether their conclusions will apply to your context.


Thanks for the advice. I will definitely have a look at the paper you linked!