Demeaning &/or normalizing data before MVPA analyses

Dear all,

I’ve seen authors in some papers “demean” the data prior to running MVPA analyses, essentially to take out the univariate differences (e.g., more overall activation in condition A than in condition B) and only consider differences in the patterns themselves.
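
Just to be explicit about the operation I have in mind, here is a minimal NumPy sketch (placeholder data and array sizes, not taken from any particular paper): each pattern’s mean across voxels is subtracted, so only the spatial pattern around that mean is left.

```python
import numpy as np

# Placeholder data: 40 trials x 200 voxels from some ROI
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))

# "Demeaning" each pattern: subtract its mean across voxels, which removes
# the overall (univariate) activation level and keeps only the spatial
# pattern around that mean.
X_demeaned = X - X.mean(axis=1, keepdims=True)
```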

To me, that makes sense especially for cross-modal decoding: in areas previously defined as “unisensory” (e.g., for modality A), we will probably get much less activation for modality B, and that could influence the decoding (see, for instance, Smith et al., 2011). However, some have warned about the potential negative effects of these methods on the interpretation of the data (I’m thinking of Ramirez’s paper, for instance, although it deals more with RSA).

Then there is also the z-scoring of the patterns, in which the training data are normalized (z-scored) across conditions before being given to the classifier.
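
For the z-scoring, something along these lines is what I mean (again a sketch with placeholder data, scikit-learn assumed, and an arbitrary linear SVM as the classifier), where the scaling parameters are estimated on the training folds only and then applied to the held-out fold:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Placeholder data: 40 trials x 200 voxels, two conditions
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))
y = np.repeat([0, 1], 20)

# StandardScaler z-scores each voxel across the training samples
# (all conditions pooled); inside the pipeline it is fit on each
# training fold only and then applied to the held-out fold.
clf = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(clf, X, y, cv=StratifiedKFold(n_splits=5))
print(scores.mean())
```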

I have been looking for procedures to perform MVPA analyses in a more appropriate way, especially as we use two different sensory modalities (either auditory & tactile or visual & tactile) and also do cross-modal decoding, for which such methods could actually be pertinent.

What is your take on these methods, and in which contexts should they be used? Theoretically, what is your opinion on their use and their impact on the subsequent interpretation of decoding performance?

Thanks!!
Jeanne