# How to model the trials from different conditions?

We have 3 sensory modalities (A, H, and AH), each containing 2 stimulus textures, and we want to decode the 2 textures within each sensory modality: that is, we want to run 3 separate decodings on the same set of data.

For example, to decode texture during the A modality, we will model in the GLM one beta per trial of the A condition (for each session/run). For the other conditions, we have two possibilities:

1. Also model one beta per trial for the other sensory modalities, H and AH.
2. Model only one beta per condition: one beta for H, one beta for AH.

Which option would be the most appropriate and statistically correct?

Keep in mind that we will then also decode texture in the H and AH conditions (so a total of 3 decodings on exactly the same dataset).

If we choose the first option, we can use the same GLM for all three decoding procedures (decoding for A, for H, and for AH). If we choose the second option, we will have to use a specific GLM for each modality (one GLM for the A decoding, another one for the H decoding, and a third one for the AH decoding).
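The difference between the two options boils down to how the events are labeled before fitting the GLM. Here is a minimal sketch with a hypothetical events table (the column names `modality` and `texture` and the trial counts are invented for illustration): in option 1 every trial gets a unique `trial_type`, so the GLM produces one beta per trial; in option 2 (shown here for the A-decoding GLM) only A trials keep unique labels, while all H trials collapse into one regressor and all AH trials into another.

```python
import pandas as pd

# Hypothetical events for one run: each row is one trial.
events = pd.DataFrame({
    "onset": [0, 10, 20, 30, 40, 50],
    "duration": [2] * 6,
    "modality": ["A", "A", "H", "H", "AH", "AH"],
    "texture": ["t1", "t2", "t1", "t2", "t1", "t2"],
})

# Option 1: unique trial_type for every trial, in every modality,
# so the GLM yields one beta map per trial.
opt1 = events.copy()
opt1["trial_type"] = [
    f"{m}_{t}_trial{i:02d}"
    for i, (m, t) in enumerate(zip(opt1["modality"], opt1["texture"]))
]

# Option 2 (A-decoding GLM): one regressor per A trial, but a single
# regressor for all H trials and a single one for all AH trials.
opt2 = events.copy()
opt2["trial_type"] = [
    f"A_{t}_trial{i:02d}" if m == "A" else m
    for i, (m, t) in enumerate(zip(opt2["modality"], opt2["texture"]))
]

print(opt1["trial_type"].nunique())  # 6 regressors: one per trial
print(opt2["trial_type"].nunique())  # 4 regressors: 2 A trials + H + AH
```

With option 2, three such relabelings (one per modality) would be needed, hence three separate GLMs.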

Hello Jeanne,

To have a balanced dataset when decoding (the same number of samples for each condition) and images that have the same meaning across conditions, I would go for option 1.

E.g., if you reuse your decoder on an unseen trial, you want it to discriminate whether the statistical image comes from a trial of A or from something else, not whether it is an A trial or a beta summarizing many other images from other conditions.

And yes, in this case the same GLM is well suited.
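Concretely, with the single GLM of option 1 each decoding is just a row selection on the stack of trial-wise betas. A minimal sketch (the array shapes, label arrays, and trial counts are invented for illustration; real inputs would come from your fitted GLM):

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical per-trial beta maps from the single GLM of option 1:
# 60 trials (20 per modality) x 500 voxels, with modality/texture labels.
betas = rng.standard_normal((60, 500))
modality = np.repeat(["A", "H", "AH"], 20)
texture = np.tile(["t1", "t2"], 30)

# Decoding texture within the A modality: keep only the A-trial betas.
mask = modality == "A"
X, y = betas[mask], texture[mask]
clf = LinearSVC().fit(X, y)  # 20 samples, balanced: 10 t1 vs 10 t2
print(X.shape)  # (20, 500)
```

The H and AH decodings reuse the same `betas` array with a different mask, which is why one GLM suffices for all three.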

As a side note on how many decoders you fit: if you fit one classifier, for example an SVC from scikit-learn, to a dataset where X is your images and Y your conditions, and Y contains more than two different values (let's say 3), you are in the setup of multiclass classification.
Classifying one class against everything else is called one-versus-rest (OvR for short), and it is exactly what most scikit-learn classifiers do by default. So when you do

svc.fit(X, Y)

implicitly three classifiers will be fitted, one for each class against the rest, and you can find their coefficients in svc.coef_[i], where i is the index of each class.
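A small sketch of this one-versus-rest behavior, using `LinearSVC` (which fits one-versus-rest by default; note that the kernel-based `sklearn.svm.SVC` instead trains one-versus-one classifiers internally, so its `coef_` has a different layout). The data here are random and purely illustrative:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.standard_normal((90, 500))    # e.g. 90 trial-wise beta maps
Y = np.repeat(["A", "H", "AH"], 30)   # three condition labels

# With more than two classes, LinearSVC fits one binary classifier
# per class against the rest: one row of coef_ per class.
svc = LinearSVC().fit(X, Y)
print(svc.classes_)      # classes in sorted order: ['A' 'AH' 'H']
print(svc.coef_.shape)   # (3, 500): one weight vector per class
```

`svc.coef_[i]` is then the weight map of class `svc.classes_[i]` against the rest.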

Be careful to use a proper cross-validation scheme, though: for fMRI decoding, you typically want to leave out whole runs so that training and test trials never come from the same session.