Question About ROI Analysis and Mask Generation in AFNI

Hi everyone,

I have completed the group-level data analysis in AFNI, and I would like to proceed with an ROI analysis. My plan is to first generate a whole-brain mask from the group-level analysis. I will use the Clusterize function to obtain this mask, and then I'll extract values from each subject's second-level data using this mask.

Specifically, I will open my group-level analysis in AFNI, set the p-value to 0.001, choose Clusterize, and use the default cluster size of 40. After that, I will click on Report and save the mask, which generates the command below.

3dClusterize -inset MLE_All_AcrossCond+tlrc.HEAD -idat 1 -ithr 1 -NN 2 -clust_nvox 40 -bisided -3.3982 3.3982 -pref_map Clust_mask

My question is: what should I set the cluster size to? Can I use this mask directly as the ROI mask for my subsequent analyses? And should I keep all the other settings and parameters at their default values?

Thank you!

Hi-

A couple things here:

  • the “Clusterize” button in the GUI is certainly fine to use, but there is also a command-line program called 3dClusterize that you can script, to be able to run the step repeatedly. In fact, when the Clusterize button in the GUI runs, it echoes a copy of the 3dClusterize command it is running internally to the terminal, and you can reuse that later.
  • If you are running ROI-based analyses, typically this should mean that you did not blur (=spatially smooth) your FMRI data during processing.
  • ROI-based analyses are typically not cluster-wise corrected; they might instead use FDR-based or other corrections.
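
As a sketch, the scripted version of the GUI step might look like the following. This reuses the command echoed by the GUI above; the subject dataset name and sub-brick index in the extraction step are hypothetical placeholders, and 3dROIstats is one standard AFNI program for pulling per-ROI values:

```shell
# Re-run the Clusterize step from a script (same options the GUI echoed):
3dClusterize                                \
    -inset      MLE_All_AcrossCond+tlrc.HEAD \
    -idat       1                            \
    -ithr       1                            \
    -NN         2                            \
    -clust_nvox 40                           \
    -bisided    -3.3982 3.3982               \
    -pref_map   Clust_mask

# Then extract a mean value per cluster from each subject's data
# (subject dataset name and sub-brick selector are hypothetical):
3dROIstats -mask Clust_mask+tlrc "subj01_stats+tlrc[1]"
```

Because the script records every option explicitly, rerunning the analysis with a different threshold or cluster size is just a one-line change.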

If you did a voxelwise analysis, though, cluster correction comes in.
What is the correct cluster value? Indeed, that is a question that has plagued researchers since the dawn of civilization. The main argument for clustering is:

  • doing a voxelwise analysis is a “massive univariate analysis”: if you run approx 50,000 simultaneous tests, you should adjust for that in your thresholding for what is labelled as being statistically significant. That is, even with the holy threshold of p=0.001, any voxel that survives should not be interpreted as “only occurring 1/1000 times by chance if there were no underlying effect”; across 50,000 tests, roughly 50 voxels would be expected to pass that threshold by chance alone.
  • The typical voxelwise adjustment is to control the family-wise error (FWE) rate, essentially adding another condition to limit the expected number of false positives across the tested set of items. Clustering is a common way of doing that in neuroimaging: the idea is that bigger “islands” of voxels passing the initial threshold are more believable and less likely to happen by chance. Islands can form randomly from chance “high” statistic values, and the number expected due to noise depends on the smoothness of the noise in the data; if noise is splattered around in big, smooth splotches, then high-statistic islands are more likely to occur. So, if you can estimate the spatial extent of the noise influence, that can help create an “island size” (cluster) threshold in this scenario. Note that clustering is somewhat ad hoc, and it is very biased against small regions (e.g., subcortical nuclei): real activations there can be missed just because the regions are small. Anyways, the programs for estimating the spatial extent of noise in AFNI are 3dFWHMx, which estimates the spatial autocorrelation of the noise from the FMRI residual time series, and 3dClustSim, which uses that information to simulate island creation and outputs cluster sizes for a given voxelwise (p-value, often p=0.001) and clusterwise (alpha value, often alpha=0.05) threshold. Other software like SPM uses things like random field theory to estimate cluster size; one can also avoid clusters altogether and use a permutation-based methodology, in AFNI or FSL (or likely others, too).
  • Probably a more important aspect, though, is that thresholding itself is arbitrary. Results are sensitive to the mask structure, the threshold values, and more. There are methods that try to be more omnibus, like FSL’s TFCE and AFNI’s ETAC, but there will always be an arbitrariness to thresholding.
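
The multiple-testing point in the first bullet can be made concrete with a quick simulation, independent of AFNI. Under the null hypothesis, p-values are uniform, so roughly n_tests × p voxels will pass any threshold by chance alone:

```python
import numpy as np

# Simulate ~50,000 independent null tests (no true effect anywhere).
rng = np.random.default_rng(seed=0)
n_vox, p_thresh = 50_000, 0.001

# Under the null, p-values are uniform on [0, 1].
p_vals = rng.uniform(size=n_vox)
n_false = int((p_vals < p_thresh).sum())

# Expected false positives: n_vox * p_thresh = 50, i.e., dozens of
# "significant" voxels despite there being zero real signal.
print(n_false)
```

Real voxels are not independent (the noise is spatially smooth), which is exactly why 3dClustSim simulates smooth noise fields rather than relying on this back-of-the-envelope count.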
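
In script form, the AFNI estimation pipeline described above typically looks something like the following sketch. The file names are hypothetical, and the three ACF parameter values passed to 3dClustSim are made-up placeholders standing in for the (averaged) output of 3dFWHMx:

```shell
# 1) Estimate the spatial autocorrelation (ACF) of the noise from each
#    subject's regression residual time series (file names hypothetical):
3dFWHMx -mask mask_group+tlrc -acf NULL errts.subj01+tlrc

# 2) Average the ACF parameters (a, b, c) across subjects, then simulate
#    noise-only data to get the cluster-size threshold for the chosen
#    voxelwise p-value and clusterwise alpha (ACF values are placeholders):
3dClustSim -mask mask_group+tlrc -acf 0.7 3.5 12.0 \
           -pthr 0.001 -athr 0.05
```

The 3dClustSim report then gives the minimum cluster size (in voxels) to use in place of the GUI default of 40.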

Whenever reporting results, whether voxelwise or ROI-based, there are some other important considerations related to thresholding, going more deeply beyond it into interpretation and meaning. Believing that results above the double threshold are the only real ones, and that nothing else has value or should be shown, makes results reporting very sensitive to processing choices; neither FMRI physics nor human biology nor mathematics really works that way. For this reason, using transparent thresholding with any of these methods seems important.

–pt