Hi @sinclair_allie and @Mkassaie, I’m not an expert here, but I’ll try to clear things up.
There are two main methods for removing the effect of low-frequency drift in your data in a task-fMRI study:
- Pre-filtering - removing low-frequency signals before model fitting: prior to performing your model fitting, you apply a filter to your data set (for example using `fslmaths -bptf`), and then use this pre-filtered data in your model fitting. With this approach, you should apply the same pre-filtering to the explanatory variables in your design matrix - as all of the low-frequency signal has been removed from your data, there is no point in having any low-frequency signal in your EVs. High-pass filters can also affect the beginning and end of a signal, e.g. introducing a “roll-off” or “dampening” in the first and last few seconds. Applying the same filter to both your data and your model ensures that they have roughly the same temporal characteristics.
- Modelling low-frequency drifts: here you don’t perform any pre-filtering at all, but instead use some process to estimate the low-frequency signals that are present in your data, and add those estimates as regressors to your design matrix. In this case you do not need to apply any filtering to your other explanatory variables - any variance in your data which correlates with the low-frequency regressors will be assigned to them, and any remaining variance will be assigned to your other EVs.
(Note that I’m using the terms regressor and EV (explanatory variable) somewhat interchangeably, to refer to columns in your design matrix.)
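To make the two approaches concrete, here is a small numpy sketch on simulated data. It is only illustrative: the drift is removed by projecting out a discrete cosine basis, which is *not* the same filter as `fslmaths -bptf` (FSL uses a Gaussian-weighted local fit), and all signal values, cutoffs, and variable names are made up. When the same drift basis is used for both, the task beta from approach 1 (filter data and EV, then fit) matches the beta from approach 2 (drift regressors in the design) exactly, by the Frisch–Waugh–Lovell theorem:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vols, tr = 200, 2.0

# Hypothetical task EV: simple on/off boxcar (values are made up)
ev = (np.arange(n_vols) % 20 < 10).astype(float)

# Simulated voxel time series: task effect + slow drift + noise
t = np.arange(n_vols) * tr
drift = 0.5 * np.sin(2 * np.pi * t / 300)  # very slow oscillation
y = 2.0 * ev + drift + 0.1 * rng.standard_normal(n_vols)

# Low-frequency DCT basis up to an assumed 100 s high-pass cutoff
cutoff_s = 100.0
n_basis = int(2 * n_vols * tr / cutoff_s)
k = np.arange(1, n_basis + 1)
dct = np.cos(np.pi * np.outer(np.arange(n_vols) + 0.5, k) / n_vols)

# Approach 1: "pre-filter" data AND the EV by projecting out the drift basis
Q, _ = np.linalg.qr(np.column_stack([np.ones(n_vols), dct]))
resid = lambda v: v - Q @ (Q.T @ v)      # remove low-frequency components
beta1 = np.linalg.lstsq(resid(ev)[:, None], resid(y), rcond=None)[0][0]

# Approach 2: no pre-filtering; drift basis goes into the design matrix
X = np.column_stack([ev, np.ones(n_vols), dct])
beta2 = np.linalg.lstsq(X, y, rcond=None)[0][0]

print(np.allclose(beta1, beta2))  # True - the task betas agree
```

Note that the agreement only holds because both approaches use the same drift basis; with FSL's actual `-bptf` filter versus, say, polynomial drift regressors, the two estimates would be similar but not identical.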
As I’m not an expert, I can’t comment on which approach is preferable; there are probably situations in which you may want to choose one or the other. For example, one benefit of the second approach is that it reduces the amount of preprocessing/manipulation applied to your data (which we usually want to keep to a minimum).
edit: There is a good (but detailed) overview of the second approach here