Smoothing parameters & cluster estimation w/ 3dFWHMx and 3dClustSim for ISC

Hi everyone!

I have completed an inter-subject correlation (ISC) analysis on my data and am now trying to do cluster thresholding and multiple-comparisons correction. I used 3dISC in AFNI, so I would like to continue using AFNI for this next part: 3dFWHMx and 3dClustSim to get my smoothness parameters and cluster-size thresholds, respectively. However, I am unsure what to input into 3dFWHMx to get the smoothness parameters. I know the usual method would be to enter the individual-subject data from the first-level analysis and then average the smoothness parameters across subjects, but the input into 3dISC was not individual subjects – it was the correlation maps between each pair of subjects in my study.

Should I be estimating the smoothness parameters from my pairwise ISC maps, as this is the actual input that went into 3dISC? Or should I be estimating the smoothness based on the individual participant data?



Sooo, this is an inherently tricky thing for multiple reasons.

The traditional way to prepare for clusterwise correction is to estimate the spatial extent of correlations in the FMRI noise: the “noise” signal in task FMRI is the residuals from modeling (in AFNI, often called “errts” = “error time series”). In olden times, one estimated the spatial smoothness of the noise as Gaussian; in modern times, we use the “ACF” (= autocorrelation function) parameter fitting, which is done with 3dFWHMx (or, even easier, with the -regress_est_blur_errts option in afni_proc.py, which hopefully you are using to set up your single-subject processing!). Once you have the ACF parameters for each subject, you can typically average these across the group (for a given site/acquisition protocol they tend to be quite similar across subjects), take the group mask, and use 3dClustSim to estimate the size of clusters in simulated noise with those spatial characteristics. For your desired sidedness of testing (see Chen et al., 2018!), voxelwise p-value threshold, and FPR/alpha level, you can then see what cluster sizes the noise-only simulations produced; that becomes the minimum cluster size for your task data. There are still subtleties to this (the residuals are not pure noise; they contain structure from our inability to model the signal perfectly, for example, but such is life).
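As a minimal sketch of those steps: the dataset names, and the per-subject ACF values below, are placeholders made up for illustration (actually running 3dFWHMx requires your data), so the 3dFWHMx and 3dClustSim calls are shown as comments/echoed strings while the averaging step runs for real.

```shell
# Per subject, estimate ACF parameters from the residuals, e.g. something like:
#   3dFWHMx -mask mask.$subj+tlrc -acf NULL errts.$subj+tlrc >> acf_all.txt
# so each line of acf_all.txt holds one subject's ACF parameters (a b c ...).
# Here we fake three subjects' lines so the averaging step can actually run:
printf '%s\n' "0.7 2.9 12.1" "0.6 3.1 11.8" "0.8 3.0 12.4" > acf_all.txt

# Average each of the three ACF parameters across subjects:
set -- $(awk '{a+=$1; b+=$2; c+=$3; n++}
              END {printf "%.4f %.4f %.4f", a/n, b/n, c/n}' acf_all.txt)
A=$1; B=$2; C=$3
echo "mean ACF: $A $B $C"

# Those group-average values then feed the noise-only simulations, e.g.:
echo "3dClustSim -mask group_mask+tlrc -acf $A $B $C -pthr 0.001 -athr 0.05"
```

The cluster-size tables that 3dClustSim prints (per sidedness and NN level) are then what you read the minimum cluster size from.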

Now, some subtlety comes in when you have non-task FMRI data, such as resting-state or naturalistic scans: your output time series of interest after the modeling/regression stage is your residual time series! So we are in the odd situation of not having separate “noise” and “signal” estimates. What do we do about clustering? Well, we actually default to the above paradigm, running the same programs on the same residuals to estimate the cluster size of “residual-only” data for the group. This is rooted partly in practicality and partly in the empirical fact that the spatial estimates of structure in the residuals of resting/naturalistic data are quite similar to those of task data (likely due to our continued inability to make detailed FMRI models; sigh). Anyways, this should still provide a pretty good estimate of the spatial extent of noise-only (or "uncontrolled") structure in the time series; if anything, it may be a conservative estimate, because having real structure in there would tend to bump up the apparent size of noise-only clusters, making cluster-size thresholds more conservative.

For your ISC data, where the actual analysis is on the pairwise correlation maps, the above seems like a reasonable way to approach clustering as well. You are basically trying to set a cluster-size threshold to ask: how big should a cluster be for it to be unlikely to arise from chance/noise alone? Looking at the spatial extent of “noise-only” structure in your acquired time series seems a reasonable way to approach that; this is helped by the practical considerations noted above, including the fact that subjects scanned under the same protocol tend to have similar spatial-extent-of-residual-structure characteristics.

For some explicit code related to these things, you might want to check out these pages:
AFNI code for Taylor et al., 2018
AFNI code for Chen et al., 2018



Wow, thank you so much for the detailed explanation, I really appreciate it! I find the whole clustering idea kind of hard to grasp but this has definitely helped me understand it more.

I ended up using the averaged ACF parameters from my pairwise correlation maps, but I will try running 3dFWHMx on my individual subjects and see if that changes anything. I am finding it hard to parse out any meaningful clusters from the sizes it has been recommending. Since my input into 3dISC was all my pairwise maps, and I have 80 participants, that corresponds to 3160 observations at each voxel. With such a large sample size, pretty much all the voxels in the brain survive even very stringent cluster sizes and p/q thresholds. E.g., for a voxelwise p-threshold of 0.001 and a cluster alpha of 0.02 (NN3, bisided), 3dClustSim recommends a cluster size of only 188 voxels; when I set this as the cluster size, pretty much the entire brain survives the multiple-comparisons correction. For purposes of displaying my data I have been setting the cluster sizes much higher so that it is easier to interpret visually, e.g. at what it recommends for a p-threshold of 0.02 and cluster alpha of 0.02, which is 1463 voxels. I feel like I am probably cutting out meaningful data when I do so, though… but it’s hard to show a figure where the whole brain is lit up with significant ISCs!
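As a quick sanity check on that count, 3160 is just the number of unordered pairs of 80 subjects:

```shell
# Number of unique subject pairs for N participants: N*(N-1)/2
N=80
PAIRS=$(( N * (N - 1) / 2 ))
echo "number of pairwise ISC maps: $PAIRS"   # -> 3160
```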

OK. I would start by going back to 3dFWHMx on the residual (“errts”) time series, averaging the ACF parameters, and going to 3dClustSim with that. I think a standard place to start would be p = 0.001, bisided, alpha = 0.05; whichever NN level you feel is appropriate is certainly fine, it just has to be kept consistent between 3dClustSim and 3dClusterize.
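A sketch of keeping those settings consistent at the clusterizing step; the dataset name and sub-brick indices are placeholders, the cluster size and threshold are example values (use whatever your own 3dClustSim run reports), and the command is echoed rather than executed here, since running it needs AFNI and your data.

```shell
NN=3        # neighborhood level; must match what was used in 3dClustSim
CSIZE=188   # example: the minimum cluster size 3dClustSim reported at this NN
THR=3.3     # example two-sided threshold on the stat sub-brick (~ p=0.001 for a z-stat)

# Placeholder dataset/sub-brick names; echoed rather than run:
echo "3dClusterize -inset ISC_result+tlrc -ithr 1 -idat 0 -NN $NN -bisided -$THR $THR -clust_nvox $CSIZE -pref_map ClustMap"
```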

ISC results are also a different beast than standard FMRI analyses; indeed, the maps in Gang’s papers show a fair amount of activity (though it still seems statistically appropriate).

Gang has been working on a region-wise approach for ISC:
as in this paper
… but it is veeery sloooow at the moment (of order weeks for a group analysis, depending on the size of the group!), so that might not be practicable for your case. You might still want to ping him with questions about it, though (he doesn’t have a neurostars account; you could use the AFNI Message Board, or email Gang Chen directly).


I went back today and ran 3dFWHMx on the individual residual maps, and it pretty much solved all the problems I had! After applying the new ACF parameters and the cluster sizes that 3dClustSim produced from them, at a level of p-threshold = 0.001 and alpha = 0.05 (bisided), the results look WAY more reasonable now. The whole brain isn’t surviving the correction anymore, only the most significant parts! It makes my visualizations MUCH more interpretable.

I have taken a look at that paper before, and I’m glad to hear it’s being worked on! I have been tempted to do my ISCs on parcellations before (i.e., parcellating the cortex, averaging the time courses over all voxels within each parcel for each subject, and then calculating the ISCs between subjects on those averaged time courses), since it cuts down so much of the computation time, but I felt like there might be more to the statistics there than meets the eye, so I’ve been hesitant.

Thank you again for all the help! I really appreciate the work that all of you at AFNI put into actually interacting with the users and getting involved in the troubleshooting process, it is one of the main reasons why I like using it over some of the other software packages out there.

Cool, I am glad that was useful. And happy to discuss any further questions.