Happy New Year everyone,
I have a general question:
Smoothing (after fMRIPrep) seems to be handled very differently across studies (I'm mostly thinking of resting-state data here, but I guess it applies to task-based data too): which kernel size to use (e.g., Gaussian FWHM of 5 vs 6 vs 8 mm), when to smooth (before or after FIX), or whether to smooth at all.
I'm wondering how people feel about other types of smoothing or downsampling procedures to potentially increase SNR, e.g., using a Hanning filter or artificially increasing voxel size.
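For concreteness, here is a minimal sketch of what a Gaussian kernel choice means in practice, assuming `scipy` is available and using a toy random volume in place of real EPI data (the sizes and the FWHM/voxel values are illustrative, not a recommendation). The key relation is FWHM = 2·sqrt(2·ln 2)·sigma ≈ 2.355·sigma:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fwhm_to_sigma(fwhm_mm, voxel_mm):
    # Convert a kernel FWHM in mm to a Gaussian sigma in voxel units:
    # FWHM = 2 * sqrt(2 * ln 2) * sigma  (~2.355 * sigma)
    return (fwhm_mm / voxel_mm) / (2.0 * np.sqrt(2.0 * np.log(2.0)))

rng = np.random.default_rng(0)
vol = rng.standard_normal((32, 32, 20))            # toy 3D "volume" of noise

# Hypothetical choice: 6 mm FWHM kernel on 3 mm isotropic voxels
sigma = fwhm_to_sigma(fwhm_mm=6.0, voxel_mm=3.0)
smoothed = gaussian_filter(vol, sigma=sigma)

# Spatial smoothing averages neighbouring voxels, so the voxel-wise
# noise standard deviation drops (the SNR motivation for smoothing)
print(vol.std(), smoothed.std())
```

In real pipelines one would use a neuroimaging tool (e.g., nilearn's `smooth_img`, which takes the FWHM in mm directly), but the underlying operation is this same Gaussian filter.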
As background: we started thinking about this while working on lesion-patient data, and we're trying to find a good way to look at single-subject data with ICA to identify known resting-state networks (or their absence) in single-subject space (so we can't use FIX or AROMA).
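The single-subject spatial ICA step can be sketched as follows, assuming `scikit-learn` is available and substituting a tiny synthetic timepoints-by-voxels matrix for real data (the component count, matrix sizes, and noise level are all hypothetical). Spatial ICA treats voxels as samples and timepoints as features, so the recovered components are spatial maps:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Toy "fMRI" data: 200 timepoints x 500 voxels, built from 3 planted
# spatial sources mixed by random time courses plus a little noise
rng = np.random.default_rng(42)
n_time, n_voxels, n_comp = 200, 500, 3
sources = rng.standard_normal((n_comp, n_voxels))   # spatial "networks"
mixing = rng.standard_normal((n_time, n_comp))      # their time courses
data = mixing @ sources + 0.1 * rng.standard_normal((n_time, n_voxels))

# Spatial ICA: samples = voxels, features = timepoints
ica = FastICA(n_components=n_comp, random_state=0, max_iter=1000)
spatial_maps = ica.fit_transform(data.T).T          # shape (components, voxels)
print(spatial_maps.shape)
```

Each row of `spatial_maps` would then be inspected (or matched against template networks) to decide whether a known resting-state network is present, which is where a stability check on the smoothing/downsampling parameters would come in.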
Of course, introducing a new parameter adds another factor to check (e.g., with cluster-stability analyses; thanks for the hint, Satra!), but since I'm rather new to the depths of rs-fMRI methodology, I thought I'd crowdsource the wisdom of the Neurostars community. And maybe other people are interested in the answers (or thoughts on it) too?
Thankful for any thoughts and feedback!