I was working on a Python implementation of Power et al. (2014) for a dataset that my team is considering censoring (due to the presence of disparate motion spikes). However, it looks like Power et al. perform temporal filtering after nuisance regression, which, as I understand it, is problematic (Lindquist et al., 2018).

I’m curious whether the neuroimaging community has worked out a way to satisfy Lindquist’s orthogonalization/simultaneous-cleaning approach while also incorporating Power’s scrubbing.

If not, would the following proposed approach satisfy both Lindquist and Power?

1. Censor the motion regressors and time-series.

2. Perform frequency-based interpolation of the censored time-points for both the motion regressors and the time-series.

3. Perform temporal filtering of both the motion regressors and the time-series (achieving orthogonalization).

4. Perform nuisance regression.

5. Re-apply the censor masks for downstream FC-based analysis.
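For concreteness, here is a minimal numpy/scipy sketch of those five steps on toy data. All shapes and frame indices are made up for illustration; the linear interpolation in step 2 is a simple stand-in for the Lomb-Scargle (frequency-based) interpolation that Power et al. describe, and the Butterworth band-pass stands in for whatever filter you settle on:

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(0)
tr = 2.0
n_tp, n_vox = 200, 10
bold = rng.standard_normal((n_tp, n_vox))   # toy time x voxel data
motion = rng.standard_normal((n_tp, 6))     # toy 6-parameter motion

# 1. Censor: mark spike frames (hard-coded here; in practice derive
#    the mask from framewise displacement).
keep = np.ones(n_tp, dtype=bool)
keep[[30, 31, 120]] = False
t = np.arange(n_tp) * tr

# 2. Interpolate over censored frames (linear here, as a stand-in for
#    frequency-based interpolation).
def fill_censored(x):
    out = x.copy()
    out[~keep] = np.interp(t[~keep], t[keep], x[keep])
    return out

bold_i = np.apply_along_axis(fill_censored, 0, bold)
motion_i = np.apply_along_axis(fill_censored, 0, motion)

# 3. Apply the SAME temporal filter to data and regressors, so the
#    subsequent regression stays orthogonal to the filter.
b, a = butter(2, [0.009, 0.08], btype="bandpass", fs=1.0 / tr)
bold_f = filtfilt(b, a, bold_i, axis=0)
motion_f = filtfilt(b, a, motion_i, axis=0)

# 4. Nuisance regression by least squares.
beta, *_ = np.linalg.lstsq(motion_f, bold_f, rcond=None)
resid = bold_f - motion_f @ beta

# 5. Re-apply the censor mask before any FC computation.
resid_censored = resid[keep]
```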

My concern is that moving the frequency interpolation prior to nuisance regression may cause unintended artifacts; I’m not knowledgeable enough to know what those could be.

If not, is the recommendation to wait for nilearn to sort out DCT-based filtering first (issue below)?

I think fMRIPrep does this correctly. It stores scrubbing motion outliers as one-hot encoded vectors to be regressed out at the same time as the other confounds. To account for the high-pass filtering of the design matrix (useful for aCompCor, for example), fMRIPrep saves out cosine regressors, which should be regressed out alongside the CompCor components. Not sure if this answers your question, but hope it helps!
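To spell out the simultaneous-cleaning part: select the spike, cosine, and aCompCor columns and regress them out in one model. This is a toy sketch with random values standing in for a real confounds table (the column names follow fMRIPrep’s conventions, but you would load the actual file with `pd.read_csv(path, sep="\t")`); a nice property is that the one-hot spike regressor drives the residual at its flagged frame exactly to zero:

```python
import numpy as np
import pandas as pd

n_tp = 50
rng = np.random.default_rng(1)
# Toy stand-in for an fMRIPrep confounds table.
confounds = pd.DataFrame({
    "a_comp_cor_00": rng.standard_normal(n_tp),
    "a_comp_cor_01": rng.standard_normal(n_tp),
    "cosine00": np.cos(np.pi * (np.arange(n_tp) + 0.5) / n_tp),
    "motion_outlier00": (np.arange(n_tp) == 10).astype(float),  # one-hot spike
})

# Simultaneous cleaning: all nuisance terms in ONE regression,
# per Lindquist et al. (2018), rather than sequential steps.
cols = [c for c in confounds.columns
        if c.startswith(("a_comp_cor", "cosine", "motion_outlier"))]
design = confounds[cols].to_numpy()

bold = rng.standard_normal((n_tp, 3))  # toy time x voxel data
beta, *_ = np.linalg.lstsq(design, bold, rcond=None)
cleaned = bold - design @ beta         # residual at frame 10 is ~0
```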

Thanks for answering so quickly; I’m glad there’s a simple solution! I’m noticing that fMRIPrep generates basis functions only for the low-frequency components, and I’d like to perform band-pass filtering (0.009–0.08 Hz).

However, I should be able to work with this and generate my own high-frequency basis functions to regress out of the signal with nilearn’s clean_img! The one-hot encoding trick for censoring time-points across all voxels makes sense as an approach as well.
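One way to generate those yourself: build a full discrete-cosine set and keep only the regressors whose frequencies fall outside the pass-band, so regressing them out approximates a band-pass inside the model. A sketch, with the function name mine and the half-sample offset following the usual DCT-II convention:

```python
import numpy as np

def stopband_cosines(n_tp, tr, f_low=0.009, f_high=0.08):
    """Cosine regressors whose frequencies fall OUTSIDE [f_low, f_high] Hz.

    Regressing these out of the data removes the stop-band content,
    approximating a band-pass filter within the regression model.
    """
    t = np.arange(n_tp)
    order = np.arange(1, n_tp)            # one cosine per DCT order
    freqs = order / (2.0 * n_tp * tr)     # frequency of each cosine (Hz)
    stop = (freqs < f_low) | (freqs > f_high)
    basis = np.cos(np.pi * np.outer(t + 0.5, order[stop]) / n_tp)
    return basis, freqs[stop]

basis, freqs = stopband_cosines(n_tp=200, tr=2.0)
```

The resulting columns can then be passed as extra confounds to nilearn’s cleaning routines (e.g. the `confounds` argument of `signal.clean` or `clean_img`), so that filtering and nuisance regression happen in a single step.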

As it turns out, I coded a script today that extracts confounds as suggested by Parkes et al. (2018). Keeping in mind that you will never want to use all of the confounds in an fMRIPrep confounds.tsv file, the script does the following:

- Extracts the 24 head motion parameters
  - x, y, and z rotation and translation, their derivatives, and all of those squared
- Extracts the global signal (it is easy to add code to extract its derivative and squared terms as desired)
- Gets aCompCor in one of a variety of ways
  - either the top 10 components or enough components to explain 50% of the variance
  - can use either separate WM and CSF masks or the combined mask
- Extracts the cosine regressors to account for high-pass filtering
- Performs high-motion scrubbing based on an adjustable framewise displacement threshold
  - can do either basic scrubbing or “optimized scrubbing” (see Power et al., 2014)
  - saves these out as one-hot encoded vectors, similar to the motion outliers already in the confounds file
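For anyone curious, the motion-related pieces above fit in a few lines of numpy. The FD formula follows Power et al. (2014) (sum of absolute backward differences, rotations converted to mm on a 50 mm sphere); the motion values and the 0.5 mm threshold are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tp = 100
trans = rng.standard_normal((n_tp, 3)) * 0.05   # toy translations (mm)
rot = rng.standard_normal((n_tp, 3)) * 0.001    # toy rotations (rad)
motion = np.hstack([trans, rot])

# 24 head motion parameters: 6 raw params, their backward-difference
# derivatives, and the squares of all 12 (the 24HMP of Parkes et al., 2018).
deriv = np.vstack([np.zeros((1, 6)), np.diff(motion, axis=0)])
hmp24 = np.hstack([motion, deriv, motion ** 2, deriv ** 2])

# Framewise displacement (Power et al., 2014): rotations are converted
# to arc length on a 50 mm radius sphere before summing.
fd = np.zeros(n_tp)
fd[1:] = (np.abs(np.diff(trans, axis=0)).sum(axis=1)
          + (np.abs(np.diff(rot, axis=0)) * 50.0).sum(axis=1))

# One-hot spike regressors for frames over an adjustable FD threshold,
# in the same style as the motion outliers already in the confounds file.
fd_thresh = 0.5  # mm -- placeholder; tune per dataset
flagged = np.flatnonzero(fd > fd_thresh)
spikes = np.zeros((n_tp, flagged.size))
spikes[flagged, np.arange(flagged.size)] = 1.0
```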

If this is something you (or anyone else) would be interested in, let me know and I can send it your way!

Also, I forgot to mention this before, but it is important to note that the recommendations from the two papers I mentioned are for resting state, not task.

It was my understanding that you wouldn’t do the band-pass separately from nuisance regression, since that would re-introduce motion correlations into your data (Lindquist et al., 2018).

Their recommendations were either to build the filter into the regression model (i.e., use cosine/Fourier basis functions as regressors) or to pre-filter both the motion regressors and the time-series and then perform nuisance regression, knowing that the motion regressors won’t leak unwanted frequencies back into your data.
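A quick numerical illustration of why the order matters (everything here is toy data, and the Butterworth band-pass stands in for any filter): residuals from filter-both-then-regress are exactly orthogonal to the filtered motion, while regress-then-filter residuals are not.

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(42)
n_tp, tr = 300, 2.0
motion = rng.standard_normal((n_tp, 1))
bold = 0.7 * motion + rng.standard_normal((n_tp, 1))  # motion-corrupted toy signal

b, a = butter(2, [0.009, 0.08], btype="bandpass", fs=1.0 / tr)
bp = lambda x: filtfilt(b, a, x, axis=0)

# (a) Problematic order: nuisance regression first, band-pass after.
beta, *_ = np.linalg.lstsq(motion, bold, rcond=None)
seq_resid = bp(bold - motion @ beta)

# (b) Lindquist-compliant order: filter BOTH, then regress.
bold_f, motion_f = bp(bold), bp(motion)
beta_f, *_ = np.linalg.lstsq(motion_f, bold_f, rcond=None)
orth_resid = bold_f - motion_f @ beta_f

# Inner product with the filtered motion regressor: ~0 only in case (b).
leak_seq = abs(motion_f.ravel() @ seq_resid.ravel())
leak_orth = abs(motion_f.ravel() @ orth_resid.ravel())
```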

My original proposal was to tackle the pre-filtering of the motion regressors while still being able to use the “optimized censoring” approach of Power et al., ideally avoiding smearing motion-spike effects when applying the initial filtering (since those bad points would be replaced by interpolations based on frequency content).

That being said, simply building cosine regressors for the unwanted frequencies into the regression model, as you initially suggested, would satisfy Lindquist’s critique!

EDIT: Yes, I should have mentioned that this is for rest data!

I think I’ve run into a bit of a problem with the implementation, unfortunately. I’m able to get okay signal correspondence between my temporally filtered vs. linearly filtered (via DCT regression) data:

Here “temporal” refers to nilearn’s built-in band-pass embedded in signal.clean, and “dct” refers to constructing the N cosine basis functions and including only the regressors that fall outside the band-pass window (0.009–0.08 Hz). Unfortunately, I seem to have hit a weird artifact induced by this filtering, as seen in the plot of the standard deviation over voxels per time-point below: