Aggressive vs non-aggressive denoising

I read many posts on the difference between aggressive and non-aggressive denoising using ICA AROMA, but I’m still very confused.
I thought the difference referred only to whether we regress out nuisance variables before fitting the full model with the variables of interest (aggressive) or add the nuisance regressors to the full model itself (non-aggressive). At least that is what @mmennes seems to imply here:

Regressing out WM and CSF prior to your 1st level model (without including any variables of interest) would be classified as aggressive denoising.

However, in this Jupyter notebook, non-aggressive denoising is done by regressing out confounds, which according to @mmennes's definition would be classified as aggressive denoising, wouldn't it?

If your design matrix only includes confound variables, with the idea of using the residuals as input for the next analysis, you are doing aggressive denoising.

If your design matrix includes confound variables AND variables of interest, you are doing non-aggressive denoising.
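The two designs described above can be sketched in a few lines of numpy (illustrative only; the data and regressor names here are made up, not from any real dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
n_tr = 200
confounds = rng.standard_normal((n_tr, 2))  # e.g. WM and CSF time series
task = rng.standard_normal((n_tr, 1))       # variable of interest
y = rng.standard_normal(n_tr)               # one voxel's time series

# Aggressive: the design holds ONLY confounds; the residuals are
# passed on as input to the next analysis.
beta_c, *_ = np.linalg.lstsq(confounds, y, rcond=None)
residuals = y - confounds @ beta_c

# Non-aggressive: confounds AND variables of interest sit in one
# design, so variance they share is attributed jointly rather than
# being soaked up entirely by the confounds.
X = np.hstack([task, confounds])
beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)
```

In the aggressive case any variance the confounds share with the (absent) variables of interest is removed along with the noise; in the non-aggressive case the joint fit splits that shared variance across all regressors.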


Thanks @mmennes. I think it's clear to me what aggressive denoising is, but I'm still confused about non-aggressive denoising after AROMA.

When I run AROMA I have the option of choosing either aggressive or non-aggressive denoising (or both). The aggressive output is essentially the residuals from a nuisance regression on the noise ICs, and I think the non-aggressive output is obtained by fitting the full mixing matrix and removing only the fitted contribution of the noise ICs, is that right?

Extending this logic to CSF and WM regressors, aggressive denoising is just the residuals of a nuisance regression on the CSF and WM regressors. But how would non-aggressive denoising be implemented here? After getting the AROMA epi_nonaggressive.nii file, how would I go about doing non-aggressive denoising of the CSF and WM regressors?

You would add the CSF and WM regressors to a design that also holds your variables of interest, e.g., task stimulus time series for a task analysis or seed time series for a seed-based functional connectivity analysis.

The point is that you either let your noise variables soak up all the variance they 'want' (aggressive), or you make sure the variance they share with good variables is properly modeled, and thus not removed (non-aggressive).
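For the AROMA case specifically, the distinction above can be sketched with the ICA mixing matrix (illustrative numpy only; the toy mixing matrix and the indices of the noise ICs are assumptions, not output of a real AROMA run):

```python
import numpy as np

rng = np.random.default_rng(1)
n_tr, n_ics = 200, 5
mixing = rng.standard_normal((n_tr, n_ics))  # full ICA mixing matrix
noise_idx = [1, 3]                           # ICs classified as noise
y = rng.standard_normal(n_tr)                # one voxel's time series

# Aggressive: regress on the noise ICs alone and keep the residuals;
# variance shared between noise and signal ICs is removed too.
beta_noise, *_ = np.linalg.lstsq(mixing[:, noise_idx], y, rcond=None)
y_aggr = y - mixing[:, noise_idx] @ beta_noise

# Non-aggressive: fit ALL ICs at once, then subtract only the unique
# contribution of the noise ICs; variance they share with signal ICs
# stays in the data.
beta_all, *_ = np.linalg.lstsq(mixing, y, rcond=None)
y_nonaggr = y - mixing[:, noise_idx] @ beta_all[noise_idx]
```

Unless the noise ICs happen to be orthogonal to the signal ICs, the two outputs differ: the aggressive residuals contain no variance explainable by the noise ICs, while the non-aggressive output keeps the part they share with signal.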


Thanks for the helpful clarifications. I had some related questions on a different thread that might have gotten buried (ICA AROMA agg vs non-agg):

  1. yes

  2. Stimulus-correlated head motion is the worst, as you can't disentangle artefact from signal. As you indicate, non-aggressive denoising would likely not (fully) remove this type of artefact (matrix algebra does not know our intentions ;-)). This would alter your stats, but you can't know whether it's biased or not, as you don't know which portion is stimulus-related activity and which portion is motion-induced. This is where ICA-based denoising could have a potential benefit, as ICA might be able to separate these two sources. In this case the inclusion of the motion parameters is not needed anymore, as the ICA components (hopefully) provide more fine-grained regressors.

Thanks, Martin! I think we're just going to try to make one confound matrix (including the cosineXX regressors for high-pass filtering) during the first-level analyses in FSL. Based on all the related threads (see links below), it seems like that would be the best way of accounting for shared variance and avoiding re-introducing noise.

Within that confound matrix, we would include the cosineXX regressors, the non-steady-state regressors, and CSF/WM regressors alongside the aromaXX regressors. I'd like to limit the loss of tDoF, but I wonder if there's any advantage in also including the 6RP (or even the 24RP) motion parameters as an added layer of protection against head motion? I realize that some of the aromaXX regressors should be correlated with the rotations and translations (and their expansions), but I don't think any of the evaluation papers (e.g., yours or the Lydon-Staley et al., 2019, Network Neuroscience) included a pipeline with AROMA and motion regressors.
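For anyone assembling such a matrix, here is a rough pandas sketch of selecting those columns from an fMRIPrep-style confounds table; the column names below are assumptions based on common fMRIPrep naming and may differ across versions, so check your own output:

```python
import pandas as pd

# Toy stand-in for an fMRIPrep confounds table (column names are
# assumptions; verify against your fMRIPrep version's output).
confounds = pd.DataFrame({
    "cosine00": [0.1, 0.2, 0.3],                 # hp-filter regressor
    "non_steady_state_outlier00": [1, 0, 0],     # dummy-scan spike
    "csf": [0.5, 0.4, 0.6],
    "white_matter": [0.2, 0.1, 0.3],
    "aroma_motion_02": [0.0, 0.1, -0.1],         # AROMA noise IC
    "trans_x": [0.01, 0.02, 0.00],               # 6RP column, excluded
})

# Keep cosineXX, non-steady-state, CSF/WM, and aromaXX columns only.
keep = [c for c in confounds.columns
        if c.startswith(("cosine", "non_steady_state_outlier",
                         "aroma_motion"))
        or c in ("csf", "white_matter")]
design = confounds[keep]

# Plain-text matrix (no header/index) usable as a FEAT confound EVs file.
design.to_csv("confounds_for_fsl.txt", sep="\t", header=False, index=False)
```

Whether to additionally keep the 6RP/24RP columns is exactly the tDoF trade-off discussed above; this sketch leaves them out.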

Best,
David

Links to related threads in case it’s helpful for others: