Hello,
I have preprocessed my resting-state fMRI images with fMRIPrep, and I am now applying confound regression and scrubbing with Nilearn (`load_confounds`, `NiftiMasker`, and `masker.fit_transform`). According to Parkes et al. (2018), subjects are usually discarded if they are left with less than 4 minutes of data after scrubbing.
My question is this: we have two consecutive resting-state fMRI runs of 150 volumes each. Is it acceptable if, overall, more than 4 minutes (120 volumes) of uncontaminated data remain after scrubbing across the two runs combined (given that I will concatenate the two runs for further analysis), or does each run need to retain more than 4 minutes of uncontaminated data on its own?
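To make the two interpretations concrete, here is a minimal sketch of the check. The TR of 2.0 s (consistent with 120 volumes = 4 min) and the per-run kept-volume counts are assumptions for illustration only; in practice the kept count would be the length of the `sample_mask` returned by `nilearn.interfaces.fmriprep.load_confounds` for each run:

```python
# Check whether enough uncontaminated data survive scrubbing,
# per run and for the two runs concatenated.
# Assumptions for illustration: TR = 2.0 s, 150 volumes per run,
# and hypothetical kept-volume counts (in practice, len(sample_mask)
# from nilearn.interfaces.fmriprep.load_confounds for each run).

TR = 2.0                 # repetition time in seconds (assumed)
MIN_SECONDS = 240.0      # 4-minute threshold from Parkes et al. (2018)

# Hypothetical counts of volumes surviving scrubbing in each run
kept_volumes_per_run = [70, 65]   # e.g. run 1 keeps 70/150, run 2 keeps 65/150

def seconds_kept(n_volumes, tr=TR):
    """Convert a count of surviving volumes to seconds of data."""
    return n_volumes * tr

# Interpretation 1: each run must individually retain >= 4 minutes
per_run_ok = [seconds_kept(n) >= MIN_SECONDS for n in kept_volumes_per_run]

# Interpretation 2: only the concatenated total must retain >= 4 minutes
concat_ok = seconds_kept(sum(kept_volumes_per_run)) >= MIN_SECONDS

print(per_run_ok)   # neither run alone reaches 240 s in this example
print(concat_ok)    # the concatenated runs do (135 volumes = 270 s)
```

With these example numbers the two criteria disagree (each run alone fails, the concatenated data pass), which is exactly the situation my question is about.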
Many thanks!
Ali