In our study, we used SPM to apply the high‑pass filter for fMRI data preprocessing. The task employed a block design, where the duration of each block varied across participants depending on their reaction times. The maximum block lengths across participants were as follows:
Inclusion 1 + Questionnaire 1: 542 seconds
Inclusion 2 + Questionnaire 2: 532 seconds
Exclusion 1 + Questionnaire 3: 487 seconds
Exclusion 2 + Questionnaire 1: 481 seconds
Based on these durations, we conservatively estimated the design frequency as 1/542 Hz and re-ran the analyses with the high-pass filter cutoff set to 542 s to verify the robustness of our results.
However, one reviewer commented that the high-pass cutoff frequency should generally be set much higher (roughly four times the frequency we used), and suggested that applying a simple linear trend might be a more practical approach, possibly in combination with nuisance regressors to model low-frequency noise.
Could you please advise how we should proceed in response to this comment? Is the reviewer’s reasoning correct in this context?
Yes. Conventionally, many fMRI studies use high-pass filter cutoffs of 128 seconds (≈0.008 Hz) or 100 seconds (0.01 Hz).
Getting the interaction between filtering and nuisance regression right is important. If you use something like the mean WM/CSF signal in your model to correct for noise, those regressors carry the same low-frequency drift as the data. So if you high-pass filter the data first and then regress out an unfiltered nuisance regressor that still contains that drift, you can unintentionally reintroduce the drift you just removed.
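As a toy illustration of this pitfall (synthetic data, not any particular pipeline; the "filter" here is simply residualizing against a linear drift term, and `wm` stands in for a white-matter nuisance signal):

```python
import numpy as np

rng = np.random.default_rng(0)
n_vols = 200
# Standardized linear drift (zero mean, unit variance)
t = np.arange(n_vols, dtype=float)
drift = (t - t.mean()) / t.std()

signal = rng.standard_normal(n_vols)            # "neural" signal of interest
wm = 0.8 * drift + rng.standard_normal(n_vols)  # WM regressor sharing the drift
y = signal + drift + wm                         # observed voxel time series

def regress_out(data, reg):
    """Residualize `data` with respect to an intercept plus `reg` (OLS)."""
    X = np.column_stack([np.ones(len(data)), reg])
    beta, *_ = np.linalg.lstsq(X, data, rcond=None)
    return data - X @ beta

# Step 1: "high-pass filter" the data by removing the drift component
y_filt = regress_out(y, drift)

# Wrong order: regress out the UNFILTERED WM signal afterwards
bad = regress_out(y_filt, wm)

# Consistent: filter the WM regressor the same way before using it
good = regress_out(y_filt, regress_out(wm, drift))

corr_bad = abs(np.corrcoef(bad, drift)[0, 1])    # drift reintroduced
corr_good = abs(np.corrcoef(good, drift)[0, 1])  # stays ~0
print(corr_bad, corr_good)
```

The residuals from the "wrong order" path correlate noticeably with the drift again, while applying the same filtering to the nuisance regressor keeps the residuals orthogonal to it, which is why most packages either filter the full design (data and confounds) identically or put the drift terms directly into the GLM.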
Software like fMRIPrep makes this process easy by generating discrete cosine regressors that act as a 128-second high-pass filter. These can be included directly in your first-level model.
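A minimal sketch of what such a cosine drift basis looks like (conventions differ slightly between packages, e.g. SPM's `spm_dctmtx` versus fMRIPrep's cosine confounds, so treat details like the exact component count and scaling as approximate):

```python
import numpy as np

def cosine_drift_basis(n_vols, tr, cutoff=128.0):
    """Discrete-cosine drift regressors spanning frequencies below 1/cutoff Hz.

    Component k of a DCT-II basis has frequency k / (2 * run_duration) Hz,
    so we keep every component below the cutoff frequency.
    """
    frame_times = np.arange(n_vols) * tr
    duration = n_vols * tr
    n_comp = int(np.floor(2.0 * duration / cutoff))  # k / (2*duration) < 1/cutoff
    basis = np.zeros((n_vols, n_comp))
    for k in range(1, n_comp + 1):
        basis[:, k - 1] = np.cos(np.pi * k * (frame_times + tr / 2) / duration)
    # Unit-normalize columns so the regressors are on a comparable scale
    basis /= np.linalg.norm(basis, axis=0)
    return basis

# Example: 300 volumes at TR = 2 s (a 600 s run) with a 128 s cutoff
X_drift = cosine_drift_basis(300, 2.0)
print(X_drift.shape)  # (300, 9): floor(2 * 600 / 128) = 9 regressors
```

Appending these columns to the first-level design matrix removes the same low-frequency band as a 128 s high-pass filter, and because the confound regressors sit in the same GLM, the reintroduction problem described above does not arise.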
Thank you for your prompt reply. Since this dataset was processed using SPM and the paper is currently under minor revision, I’d prefer not to make major changes to the processing pipeline or software at this stage.
To address the reviewer’s concern, do you think a high-pass filter cutoff of 542 s, given the duration of the task, would be appropriate? I plan to apply detrending and nuisance regression after filtering in the first-level analysis.
I do not think it is typical to both detrend and high pass filter. What is your reasoning in setting the high pass filter cutoff based on the task length instead of a more conventional standard value?
The task, named Cyberball, was a block-design task in which the duration of the blocks varied across participants depending on their reaction times. The maximum length of each block across participants was:
Inclusion 1 + Questionnaire 1: 542 seconds
Inclusion 2 + Questionnaire 2: 532 seconds
Exclusion 1 + Questionnaire 3: 487 seconds
Exclusion 2 + Questionnaire 1: 481 seconds
Based on this, we conservatively took the design frequency to be 1/542 Hz and re-ran the analyses using this high-pass filter cutoff to verify the results.
The high-pass filter is meant to remove gradual signal drift arising from technical sources (e.g., scanner instability) that are independent of the task, so I still do not see why you would tie the filter cutoff to the task length.