I’m using TDT to test whether neural representations of objects within a specific ROI differ depending on the sensory modality through which the object was presented.

For that purpose, I ran an RSA analysis following decoding_template_similarity.m. However, I used ‘cor’ as cfg.decoding.train.classification.model_parameters (instead of ‘euclidean’ as in the template). I subsequently calculated a group correlation matrix by averaging the RSA matrices across subjects. Visual inspection of this matrix suggests that activation patterns are similar for objects presented through the same sensory modality but differ between modalities.

In addition, I ran a cross-classification analysis within the same ROI with the default decoding settings. That is, I trained the classifier on objects presented in one modality and tested it on objects presented in the other modality. This yields above-chance accuracy values. From this I would conclude that activation patterns are similar across the two sensory modalities.

I’m now wondering how these seemingly opposing results of RSA and cross-classification can be explained.

I could imagine that there are three different groups of voxels within the ROI. One group distinguishes best between objects independent of the sensory modality through which the objects were presented; this group would then provide the best crossmodal classification. A second group distinguishes best between objects presented in one modality, and a third group distinguishes between objects presented in the other modality. These two groups would then provide the best intramodal classification.

I’m wondering whether the cross-classification analysis relies only on highly informative voxels within the ROI (in this case the ones that distinguish best between objects across modalities) and disregards the remaining voxels. The RSA, in contrast, considers all voxels within the ROI when calculating the correlation measure, which would be a mix of “crossmodal” and “intramodal” voxels and would therefore result in noisier measures.
Is that plausible? And how could I test whether such “crossmodal” and “intramodal” voxels actually exist?

I would be very happy if someone could help me with this matter.
Best,
Danja

Maybe I don’t understand the task, but is it possible that you simply find both? I.e., that there are strong differences between modalities, as indicated by the RSA analysis, and weaker object-specific differences that are similarly expressed across modalities, as indicated by the cross-classification analysis? So, if you take the part of the similarity matrix that belongs to each modality and correlate their lower triangular parts with each other (see code below), is the correlation larger than 0? If so, then you may have both effects. There is no reason to assume different groups of voxels, although it is interesting to think about the source of this shared representation.

Assuming you have 16 objects and two modalities, i.e. a 32x32 similarity matrix:

    submat1 = simmat(1:16, 1:16);   % simmat is your similarity matrix
    submat2 = simmat(17:32, 17:32);
    ind = tril(true(16, 16), -1);   % lower triangular part, excluding the diagonal
    subvec1 = submat1(ind);
    subvec2 = submat2(ind);
    sim = corr(subvec1, subvec2);

Also, if cross-classification is above chance, this does not mean there is modality invariance, only modality tolerance. It would only speak in favor of invariance if the decoding accuracy between modalities is roughly the same as the decoding accuracy within (assuming equally sized training data).

Thanks for your reply, Martin! Yes, it helped.
And you’re right, both could exist: stronger differences between modalities and weaker object-specific differences that are similar across modalities.

However, correlating the lower triangular parts of the similarity matrix, as you suggested, gives me a negative r-value of -0.22.

Yes, training and test data are of equal size. The decoding accuracy I get from the cross-classification is as low as the accuracy of the “worse” intra-modal classifier. That would then indicate modality tolerance only, but not invariance, right?

Hmm, that is a little unusual. Since the two submatrices are computed on independent data, I would expect them to be uncorrelated or positively correlated. Negative seems weird; I would only expect that for e.g. simmat(1:16,17:32) with simmat(17:32,1:16). But it’s possible. If the matrices are very small, it can of course happen by chance. If your data for the similarity matrices is from the same run, it is possible that the regressors are not orthogonal; in that case this could lead to a negative correlation between the betas, which could also explain the differences in the similarities.
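One way to check whether an r of -0.22 could plausibly arise by chance is a simple permutation test. A sketch (not a TDT function, just plain MATLAB), assuming submat1 and submat2 are the two 16x16 within-modality blocks from the earlier snippet; object labels are shuffled in one matrix so that the dependence structure of each similarity matrix is preserved:

    % Permutation test on the similarity of the two within-modality RDM blocks.
    % submat1/submat2 are the 16x16 blocks extracted from simmat above.
    n_perm = 10000;
    ind = tril(true(16, 16), -1);            % lower triangle, excluding diagonal
    r_obs = corr(submat1(ind), submat2(ind)); % observed correlation (e.g. -0.22)
    r_null = zeros(n_perm, 1);
    for i = 1:n_perm
        p = randperm(16);                     % shuffle object labels
        sp = submat2(p, p);                   % relabeled (still symmetric) block
        r_null(i) = corr(submat1(ind), sp(ind));
    end
    pval = mean(abs(r_null) >= abs(r_obs));   % two-sided p-value

If pval is large, the negative correlation is well within what chance produces for matrices of this size; if it is small, the anticorrelation is systematic and things like non-orthogonal regressors within a run become a more likely explanation.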

Yes. I would say it’s more difficult to show invariance than tolerance, and by claiming tolerance you are always on the safe side.