How to choose amount of smoothing

Hi all,

I would like to know if there are any standard strategies for choosing the amount of smoothing (FWHM in mm) to use with BOLD data?

I see the typical choice is between 4 and 10 mm, and I lean towards using the minimum, but I would like to know which methods allow assessing the improvement in SNR.

We are interested in contrasting the activation maps that follow 3 different types of auditory stimuli.

Thank you very much,


Hi @Uri_Shinitsky ,

It really depends on your ROI size and on the balance between your tolerance for spatial blurriness and your SNR goals, so there is no single solution. If you have really small ROIs (e.g., a hippocampal subfield), or are interested in neighboring areas and do not want to blur the signal between them, then you probably want a small smoothing kernel. If you have a low-SNR acquisition and large and/or distant ROIs, then you can probably afford a larger smoothing kernel.

A good place to start is looking at similar publications, or trying out different smoothing kernels yourself on a separate, independent (but ideally similar) dataset.
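If you want to experiment on your own data, a minimal sketch of isotropic Gaussian smoothing is below. It uses plain NumPy/SciPy rather than a neuroimaging package, and a random-noise volume plus a 3 mm voxel size are just illustrative assumptions; the key detail is the FWHM-to-sigma conversion (FWHM = 2*sqrt(2*ln 2) * sigma ≈ 2.355 * sigma), which is what turns an FWHM in mm into the sigma (in voxels) that the filter expects.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Conversion factor between a Gaussian's FWHM and its sigma:
# FWHM = 2 * sqrt(2 * ln 2) * sigma
FWHM_TO_SIGMA = 2.0 * np.sqrt(2.0 * np.log(2.0))

def smooth_volume(vol, fwhm_mm, voxel_size_mm):
    """Apply isotropic Gaussian smoothing to a 3D volume (toy sketch)."""
    sigma_vox = (fwhm_mm / FWHM_TO_SIGMA) / voxel_size_mm
    return gaussian_filter(vol, sigma=sigma_vox)

# Toy volume of pure noise, to see how smoothing reduces voxel-wise variance.
rng = np.random.default_rng(0)
vol = rng.standard_normal((32, 32, 32))

for fwhm in (4, 6, 8, 10):  # the usual 4-10 mm range
    smoothed = smooth_volume(vol, fwhm_mm=fwhm, voxel_size_mm=3.0)
    print(f"FWHM {fwhm:2d} mm -> residual noise std {smoothed.std():.3f}")
```

On real NIfTI files you would more likely use a dedicated tool (e.g., `nilearn.image.smooth_img`, which takes the FWHM in mm directly), but the noise-variance comparison above is an easy way to build intuition for what each kernel size is doing.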



Another thing to consider is that if you are planning to correct for multiple comparisons using an approach based on random field theory, then your statistical maps must reach a sufficient level of smoothness.

Off the top of my head, I think the smoothness should be 2-3 times the voxel size. But don't take my word for it and double-check before making a decision.
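For what that rule of thumb implies in practice, here is a trivial sketch; the 2-3x multiplier is quoted from memory above, so treat it as an assumption to verify against the RFT literature rather than an established constant:

```python
def rft_fwhm_range_mm(voxel_size_mm):
    """Rough FWHM range (mm) implied by the '2-3x voxel size' rule of thumb."""
    return (2 * voxel_size_mm, 3 * voxel_size_mm)

# Example: 3 mm isotropic voxels
low, high = rft_fwhm_range_mm(3.0)
print(f"Consider FWHM between {low:.0f} and {high:.0f} mm")  # between 6 and 9 mm
```

Note this is about the smoothness of the residuals/statistical maps, which is also influenced by intrinsic data smoothness, not only the applied kernel.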