I have been using the TDT toolbox (a great tool!), and I am writing to ask for advice on possible follow-up analyses after obtaining MVPA results.
We conducted a whole-brain searchlight multivariate pattern analysis on images from patients and controls. Two regions were identified after voxel-level FDR correction for multiple comparisons (the original p values were determined from 5000 permutations). My question is whether there is any way to further explore the nature of the distinctive patterns between the two groups: was the discrimination driven by overall activation differences between the groups, by different activation patterns, or by both?
Any input will be highly appreciated!
In some projects I’ve worked on, we deliberately mean-centered all voxel patterns (either betas or percent signal change belonging to the same sample) prior to decoding, thereby removing the possibility that the classifier is detecting mean differences between conditions (or, in your case, groups). Successful decoding in that case would suggest that differences can be attributed to the overall pattern, rather than to some combination of activation and pattern differences.
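As a minimal sketch of this idea (hypothetical simulated data, and scikit-learn rather than TDT): centering each sample's pattern across voxels removes a uniform mean shift between groups, so a linear classifier can no longer exploit the overall activation level.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical data: 40 samples (e.g. subjects) x 50 voxels, where
# group 1 differs from group 0 only by a uniform mean offset.
X = rng.standard_normal((40, 50))
y = np.repeat([0, 1], 20)
X[y == 1] += 0.5  # pure mean difference, identical in every voxel

# Mean-center each sample's pattern across voxels.
Xc = X - X.mean(axis=1, keepdims=True)

# Decoding on the raw patterns picks up the mean shift; on the
# centered patterns that source of information is gone.
acc_raw = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
acc_centered = cross_val_score(SVC(kernel="linear"), Xc, y, cv=5).mean()
```

With a purely uniform offset like this, accuracy on the centered data should drop toward chance; in real data, any decoding that survives centering points to pattern information beyond a uniform mean shift (with the caveat raised below that real mean effects are rarely uniform across voxels).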
One note of caution, however: I’ve really only used this approach for within-subject decoding between conditions, and I’m not sure whether there are extra things to consider when decoding between subjects. Nevertheless, it might be a worthwhile approach for your research question.
Just my two cents!
I think the question is what you mean by “overall activation differences” between the two groups. We wrote a little piece on this (quite dense, but maybe useful); if you cannot access it behind the paywall, there is also a preprint. Check out section 5.2 for that specific question.
In short, removing the mean across voxels only removes “overall activation differences” if the mean effect is the same in all voxels (which is almost never the case). If the effect varies between voxels, you will just spread the mean signal across the voxels. You can take the sensitivity of each voxel into account either by calculating a PCA on your data and taking the first component (involved, not mentioned in our paper, and possibly affected by other factors), or by calculating a mean pattern across both groups, regressing that pattern out of the data of both groups, and working on the residuals. This is equivalent to identifying a direction in multivariate space that is occupied by both groups, i.e. the overall pattern is the same but differs in amplitude. I think that is what most people have in mind when thinking about “overall response differences”. We also mention in our paper when this approach no longer works.
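A rough sketch of the second option, on hypothetical simulated data (numpy only, not TDT): estimate the mean pattern across both groups, project each sample onto it, and keep only the residuals, which are then orthogonal to that shared direction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 40 samples x 50 voxels. Assume group 2 shows the
# *same* spatial pattern as group 1 but with a larger amplitude.
n_samples, n_voxels = 40, 50
shared_pattern = rng.standard_normal(n_voxels)
amplitude = np.r_[np.ones(20), 1.8 * np.ones(20)]  # group 2 scaled up
X = amplitude[:, None] * shared_pattern + 0.1 * rng.standard_normal((n_samples, n_voxels))

# Mean pattern across both groups: the shared direction in voxel space.
m = X.mean(axis=0)

# Regress m out of every sample: per-sample amplitude along m, then
# subtract that component, leaving residuals orthogonal to m.
beta = X @ m / (m @ m)
resid = X - np.outer(beta, m)
```

Unlike subtracting the across-voxel mean, this respects each voxel's sensitivity: the removed component is scaled per voxel by the shared pattern, so a pure amplitude difference between the groups ends up in `beta`, not in `resid`.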
Another possible approach would be to use the Haufe method on your SVM weights to reconstruct the discriminative pattern (which gives you the contribution of each voxel, unaffected by the noise covariance) and then visually inspect the weights (reviewers used to ask for this a lot in the early days, to see whether it is only a “simple” blob-like response). If most weights are positive, or most are negative, that indicates that the overall activation is higher in one condition than in the other. Note, however, that this is not a statistical test of the effect.
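For a linear model with a single weight (filter) vector, the Haufe transformation amounts to applying the data covariance to the weights. A sketch on hypothetical data (scikit-learn SVM; names and data are illustrative):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Hypothetical data: 60 samples x 30 voxels, with a mean difference
# confined to the first 10 voxels.
X = rng.standard_normal((60, 30))
y = np.repeat([0, 1], 30)
X[y == 1, :10] += 1.0

# A linear SVM yields one weight (filter) vector w.
w = SVC(kernel="linear").fit(X, y).coef_.ravel()

# Haufe et al. (2014): the activation pattern is the data covariance
# applied to the filter, a = cov(X) @ w. Per-voxel signs and magnitudes
# of a are interpretable, unlike the raw weights.
a = np.cov(X, rowvar=False) @ w
```

Inspecting the sign of `a` across voxels then corresponds to the visual check described above: mostly same-signed pattern values suggest an overall activation difference between the conditions.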
Finally, you may ask yourself why it matters whether the overall activation is different. If you can show something similar with more classical analyses: great! If you want to make claims about “fine-scale patterns”, that is difficult anyway. So in most cases I don’t see a reason to control for overall activation differences.
I just realized that I never replied! I wanted to thank you for the detailed answers. They are very useful!
I was actually not looking for ways to control for the overall activation differences, but rather to show that these differences contributed to the distinct activation patterns observed between the groups. I followed your second suggestion and used the TDT toolbox to extract the pattern values for this purpose.
Thanks again for your help and for developing the TDT toolbox!