Correction needed to spatially interpret coefficients of decoding models (classifiers) applied to fMRI data

Hi!

I have trained a multiclass SVC decoding model on fMRI data using sklearn, and then reprojected the model’s coefficients into 3D space using the inverse_transform method of a NiftiMasker from nilearn, in order to identify the brain regions that are most predictive in the classification process. I am asking my question here because I came across this article (On the interpretation of weight vectors of linear models in multivariate neuroimaging - ScienceDirect), which suggests that direct interpretation of the coefficients of a backward model, such as a decoding classifier, can lead to wrong conclusions. The authors therefore suggest applying a correction (based on the covariance of the data) before spatially interpreting the model’s coefficients.

There are a lot of tutorials on decoding (especially with nilearn), but I have not seen any mention of how to interpret the model’s coefficients. I am now looking for advice on whether such a procedure is necessary, whether it is common practice, and how to implement it. My code is available on GitHub; let me know if other details about my analyses are needed to discuss this topic further. Thank you for any advice or suggestions.

Greetings,
Dylan Sutterlin B. Sc.

Yes, there are serious issues with interpreting SVM weights, even from linear SVMs. Corrections can help, but there are still a lot of caveats and assumptions. If possible, I suggest starting from ROIs (or something like a whole-brain parcellation) and describing which ROIs/parcels have signal, rather than starting with the entire brain (or a region larger than you care about) and then trying to figure out which “bits” of it are most important. If you need to work “big to small”, something like random forests might be more suitable.
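For concreteness, the correction described in that article (the “activation pattern” transformation of Haufe et al., 2014) can be sketched roughly as follows. This is a minimal sketch, assuming `X` is the (n_samples, n_voxels) matrix the model was trained on, `clf` is the fitted linear SVC, and `masker` is the NiftiMasker; all three names are placeholders for your own objects.

```python
import numpy as np

# Minimal sketch of the Haufe et al. (2014) transformation from backward
# (decoding) weights to forward-model activation patterns:
#     A = cov(X) @ W @ inv(cov(s))
# `X`, `clf`, and `masker` are placeholders for your own data and objects.
W = clf.coef_.T                      # (n_voxels, n_pairs) for a one-vs-one SVC
Xc = X - X.mean(axis=0)              # centered training data
s = Xc @ W                           # decoder outputs, (n_samples, n_pairs)

n = X.shape[0]
cov_X_W = Xc.T @ s / (n - 1)         # equals cov(X) @ W without forming cov(X)
cov_s = np.atleast_2d(np.cov(s, rowvar=False))

A = cov_X_W @ np.linalg.pinv(cov_s)  # activation patterns, (n_voxels, n_pairs)

# Each column of A can be mapped back to brain space for inspection:
pattern_img = masker.inverse_transform(A[:, 0])
```

Even after this transformation, the patterns depend on how well the covariance can be estimated from a typically small, noisy sample, so the caveats above still hold.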

I’d second @jaetzel’s suggestion to look at a whole-brain parcellation rather than whole-brain voxel-level SVM weights, if this is indeed what you are considering!

I’d also generally recommend considering permutation importance to assess how particular features (like a given ROI) contribute to classification accuracy: 4.2. Permutation feature importance — scikit-learn 1.1.3 documentation
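As a rough illustration, assuming a hypothetical `X_parcels` array of shape (n_samples, n_parcels) holding parcel-averaged signals and `labels` holding the class labels (both names are placeholders):

```python
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical parcel-level features `X_parcels` and class `labels`.
X_train, X_test, y_train, y_test = train_test_split(
    X_parcels, labels, stratify=labels, random_state=0
)
clf = SVC(kernel="linear").fit(X_train, y_train)

# Shuffle each parcel's values and measure the drop in held-out accuracy;
# larger drops mean that parcel carried more class information.
result = permutation_importance(clf, X_test, y_test, n_repeats=50, random_state=0)
for i in result.importances_mean.argsort()[::-1][:10]:
    print(f"parcel {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```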

HTH,

Elizabeth

Thank you both for your interesting suggestions.
To interpret weights at the parcel level rather than at the voxel level, should I apply the parcellation to the brain maps (the X data) prior to the SVC training phase, or to the weight maps that result from the model (in my case, the one-vs-one contrast maps)?

Also, assuming the parcellation strategy, doesn’t the same backward-model weight interpretation problem arise, since we are still trying to spatially interpret the model’s coefficients, just at a broader level?

Thank you very much,

Dylan Sutterlin

Classify using the voxels/vertices within each individual parcel, and interpret the results at the level of parcels (e.g., which parcels classify above chance), so the weight interpretation problem doesn’t apply: you never interpret the fitted weights themselves, only each parcel’s classification performance. I have a (non-scikit-learn) example at MVPA Meanderings: DMCC55B supplemental as tutorial: positive control "buttons" classification analysis
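In nilearn/scikit-learn terms, a minimal sketch of that idea could look like the following, assuming `func_imgs` holds your trial-level images, `labels` the class labels, and `atlas_img` an integer-labeled parcellation image (all names are placeholders):

```python
import numpy as np
from nilearn.image import math_img
from nilearn.maskers import NiftiMasker
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Fit a classifier on the voxels inside each parcel and keep the
# cross-validated accuracy as the parcel-level result.
parcel_ids = np.unique(atlas_img.get_fdata())
parcel_ids = parcel_ids[parcel_ids != 0]   # drop the background label

accuracies = {}
for pid in parcel_ids:
    parcel_mask = math_img(f"img == {int(pid)}", img=atlas_img)
    masker = NiftiMasker(mask_img=parcel_mask, standardize=True)
    X = masker.fit_transform(func_imgs)    # voxels within this parcel only
    accuracies[int(pid)] = cross_val_score(
        SVC(kernel="linear"), X, labels, cv=5
    ).mean()

# Report which parcels classify above chance, rather than voxel weights.
```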
