Despiking vs scrubbing

What is the difference between scrubbing and despiking? I thought they referred to the same thing, but it seems that scrubbing is generally not recommended, whereas despiking seems to be done routinely.

Despiking is a process in which large spikes in the fMRI time series are truncated. That is, imagine you had a spike that was +4 STDs above the mean. Despiking might cap it (by reducing the value at that point) to a level of 2 STDs. This limits the magnitude of data spikes. There is still “bad” data there, but the amplitude is reduced, and so the impact on later analysis is minimized to an extent.
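For intuition, here is a minimal numpy sketch of that capping idea. It is not the actual algorithm used by tools like AFNI’s 3dDespike (which fits a smooth trend to each voxel’s time series and measures deviations from that fit); the thresholds and the function name are purely illustrative.

```python
import numpy as np

def despike_cap(ts, spike_thresh=4.0, cap=2.0):
    """Cap points beyond spike_thresh SDs from the mean at cap SDs.

    Illustrative only: real despiking tools measure deviations from a
    fitted smooth trend per voxel, not from the raw mean as done here.
    """
    mu, sd = ts.mean(), ts.std()
    z = (ts - mu) / sd
    out = ts.copy()
    spikes = np.abs(z) > spike_thresh
    out[spikes] = mu + np.sign(z[spikes]) * cap * sd
    return out

# A large spike gets pulled down to roughly +2 SD; all other points pass through.
rng = np.random.default_rng(0)
ts = rng.normal(1000.0, 10.0, size=200)
ts[50] += 60.0  # inject a ~+6 SD spike
print(despike_cap(ts)[50])  # ~1022 rather than ~1060
```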

Scrubbing is a process in which TRs with excessive motion or signal deviation are identified and then removed from the data. This can be very effective (maybe) at removing spurious sources of connectivity in fMRI data. But it must be applied carefully, as it can have unintended consequences. For example, if you remove TRs via scrubbing and then apply a temporal filter, the filter will not actually work correctly, because the filter assumes an intact, evenly sampled time series, and scrubbing has broken that temporal structure.
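A minimal sketch of what scrubbing amounts to, assuming a precomputed framewise-displacement trace (the 0.5 mm cutoff is just a common illustrative choice, not a recommendation):

```python
import numpy as np

def scrub(data, fd, fd_thresh=0.5):
    """Drop volumes whose framewise displacement exceeds fd_thresh.

    data: (n_timepoints, n_voxels) array; fd: (n_timepoints,) motion trace.
    The surviving volumes are no longer evenly spaced in time, which is
    exactly why filtering *after* scrubbing is problematic.
    """
    keep = fd <= fd_thresh
    return data[keep], keep
```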

Likewise, if you are doing a task-based fMRI analysis, scrubbing of individual TRs may be inappropriate. Scrubbing/removing whole trials that contain high-motion or high-signal-deviant TRs may be more appropriate.

It has been argued (sorry, I don’t have the citation with me) that one of the main benefits of data scrubbing is actually to force you to throw away participants who have excessive motion, because too many of their TRs have been scrubbed.

So, despiking modifies but retains all the data, while TR scrubbing removes some of the data. Despiking may be safer and have fewer unintended consequences. TR scrubbing may be a more effective method for removing spurious connectivity, but maybe not. The jury is still out.

Another interesting detail is that despiking treats each voxel independently. At every TR, a different number of voxels can be modified to remove spikes. More info about despiking: https://afni.nimh.nih.gov/pub/dist/doc/program_help/3dDespike.html

There is a third option, censoring (https://www.ncbi.nlm.nih.gov/pubmed/23861343), in which the identified outliers are modelled as separate regressors. It can be used together with despiking. I like it for task fMRI.
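For concreteness, here is a small sketch of what those separate regressors look like (the function and variable names are just illustrative, not from any particular package):

```python
import numpy as np

def spike_regressors(n_timepoints, bad_volumes):
    """One column per flagged volume: 1 at that TR, 0 everywhere else.

    Appended to the GLM design matrix, each column absorbs its flagged
    volume while the time series itself stays intact and evenly spaced.
    """
    cols = np.zeros((n_timepoints, len(bad_volumes)))
    for j, t in enumerate(bad_volumes):
        cols[t, j] = 1.0
    return cols

# e.g., flag volumes 12 and 47 in a 200-TR run, then stack onto the design:
nuisance = spike_regressors(200, [12, 47])
# design = np.column_stack([task_regressors, nuisance])  # task_regressors: hypothetical
```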

One feature (advantage?) of despiking voxel-wise over censoring/scrubbing comes from the way that FMRI task-based regression is usually implemented – using the same modeling matrix for each voxel timeseries. Despiking preserves time points in the data (it just munges the data around), whereas censoring removes them (one way or another). Using the same matrix for each voxel limits the number of time points that can be censored, and it means the choice to censor cannot be made on a voxel-wise basis (otherwise one could easily end up censoring out almost every time point, because SOME voxel is spiky at nearly every instant).

In afni_proc.py, “outliers” (similar to “spikes”, unusually large/small values) are counted in each volume BEFORE any further processing. Despiking, if ordered, is done next. Later, the outlier count per volume (time point) is used to decide if that time point should be censored. (Large motions also cause censoring.) So the net effect of despiking is to reduce outliers in volumes that don’t end up being censored out. The outliers in volumes that are destined to be censored are also despiked, but that won’t matter much since those volumes will eventually be cast aside.

Thank you all for your answers. To my limited understanding, it would be better not to modify (despike) or delete (scrub) data, but to account for unusually large spikes or TRs with excessive motion in a regression model. In this post, Remi Patriat suggests doing exactly that. Is it possible to do this with the ArtRepair toolbox or the ArtifactDetect (rapidart) algorithm? I suppose despiking occurs at the voxel level, and for that reason I cannot add regressors to the GLM to identify problematic spikes, is that correct?

@Chris, that is very interesting and perhaps has similarities with Patel et al.'s 2014 wavelet-based algorithm, which also applies despiking to each voxel independently?

I don’t completely agree… Yes, despiking is better in that it is done voxelwise, but with censoring, even though the same censoring is applied everywhere, the fitted coefficients vary across voxels, so you still get different weighting in space. I personally don’t like scrubbing (removing images) and screwing up the time series.

Cyril

The ArtRepair toolbox does scrubbing, or substitutes an image with the mean of its neighbours – despiking is quite good, and you can still add censoring (one regressor per image in which you know you have artifacts).

Inserting a regression matrix column that is 1 at exactly one time point and 0 otherwise is equivalent to deleting that time point from the data and deleting the corresponding row from the (unmodified) regression matrix. This can be seen by symbolically performing Gaussian elimination on that time index first.
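This equivalence is easy to verify numerically; here is a quick check with toy data (all names and values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n, t_bad = 50, 17
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # toy design matrix
y = X @ np.array([2.0, 1.5, -0.5]) + rng.normal(size=n)
y[t_bad] += 10.0  # an artifactual spike at one time point

# (a) censor via regression: add a column that is 1 at t_bad, 0 elsewhere
onehot = np.zeros((n, 1))
onehot[t_bad] = 1.0
beta_a = np.linalg.lstsq(np.hstack([X, onehot]), y, rcond=None)[0][:3]

# (b) delete the bad time point from y and the corresponding row of X
keep = np.arange(n) != t_bad
beta_b = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]

print(np.allclose(beta_a, beta_b))  # True: the two estimates coincide
```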

Whether “screwing up” the time series is an issue depends on what further processing you want to do with it. For example, the “usual suspects” for estimating autoregressive model parameters assume even time spacing, so deleting a time point messes up those algorithms – that’s where despiking, or perhaps censoring-via-regression, would be preferred. Of course, one can use an algorithm that doesn’t require even time spacing (that’s what AFNI does in 3dREMLfit, for example).

Thanks @BobCox, I didn’t know that adding regressors with ones and zeros would amount to deleting the whole TR. I’m very surprised, because I think it is common practice to insert regressors that way to account for unusual TRs, under the assumption that this is different from simply deleting them. I actually thought your suggestion of “censoring-via-regression” would be equivalent to including regressors for each scan. How is it different?