Nilearn -- classification examples other than those on wiki

Hello,

I want to use nilearn to do MVPA and also to validate my analysis done elsewhere.

However, the haxby example is a little different from my own data and study.

For example, Haxby uses a block design with specific stimulus types (faces/houses, etc.), where each block has a fixed duration during which 5 or so stimuli are presented. Each block type occurs only once per run, so there is one block per stimulus type per run.

Also, in this example the event/onset files are given in volume (TR) indices, whereas my events are modelled as onset times (I guess I could convert my onsets to TR indices by dividing by the TR, but I would really rather not).

My design is event-related: repetitions of the stimuli occur one after another in random order, and each condition is repeated more than once within a run. So again I don’t know what I should do, because in Haxby each block type has only one block per run.

Finally, I may want to do a leave-one-chunk-out (or leave-one-out) procedure, in which trials from different runs can be mixed between training and testing, instead of a leave-one-run-out procedure, because there seems to be a practice effect inherent to the task that affects classification accuracies.

Can someone please guide me or point me to the functions I would need to implement this kind of analysis? Or share some example code that does this?

I think that what you need is described here: Beta-Series Modeling for Task-Based Functional Connectivity and Decoding - Nilearn
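In rough outline, the LSA variant in that example gives every trial its own regressor and then computes one beta map per trial. A minimal sketch of the idea (here fmri_img, events, and the t_r value are placeholders for your own data):

from nilearn.glm.first_level import FirstLevelModel

# `events` is assumed to be a BIDS-style DataFrame with onset / duration /
# trial_type columns; onsets are in seconds, no conversion to TRs needed.
lsa_events = events.copy()
# Give every trial a unique name, e.g. "language__000", "language__001", ...
lsa_events["trial_type"] = [
    f"{cond}__{i:03d}" for i, cond in enumerate(lsa_events["trial_type"])
]

glm = FirstLevelModel(t_r=2.0, minimize_memory=True)  # t_r: your repetition time
glm.fit(fmri_img, events=lsa_events)

# One beta (effect-size) map per trial, grouped by the original condition.
lsa_beta_maps = {cond: [] for cond in events["trial_type"].unique()}
for trial_name in lsa_events["trial_type"]:
    beta_map = glm.compute_contrast(trial_name, output_type="effect_size")
    lsa_beta_maps[trial_name.split("__")[0]].append(beta_map)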
LMK if you need more help.
Best,
Bertrand

Thank you for your response.

As you suggested, I used the beta series script with the LSA method.

I deleted the irrelevant code sections (such as LSS and connectivity) and added this part at the end of the script:

from itertools import chain

from sklearn.model_selection import LeaveOneOut
from nilearn.decoding import Decoder

decoder = Decoder(
    estimator="svc",
    standardize=False,
    cv=LeaveOneOut(),
)

# Flatten the per-condition beta maps into one list of images.
betas = list(chain(*lsa_beta_maps.values()))

# Build the labels, assuming two conditions with equal numbers of beta maps,
# in the same order as the dictionary keys ('language' first, then 'string').
beta_count = len(betas) // len(lsa_beta_maps)
labels_1 = ["language"] * beta_count
labels_2 = ["string"] * beta_count
all_labels = labels_1 + labels_2

decoder.fit(betas, all_labels)

Here, how can I define chunks, so that K samples of language (e.g. 3) are used for testing and the rest (9) for training, and likewise for every other combination? Also, how can I check the classification accuracies after doing this?

You need to define a cross-validation scheme, in which your data are split into train and test sets.
One way to do it is to define a groups variable that tags your samples (typically it separates the data into independent chunks such as runs, sessions, or subjects). The decoder.fit() method then uses the groups to perform the cross-validation automatically.
See e.g. Decoding with ANOVA + SVM: face vs house in the Haxby dataset - Nilearn
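For instance, with your two conditions and 12 beta maps each, you could split each condition into 4 chunks of 3 and leave one chunk out per fold. A sketch (the chunk assignment below is an assumption: it puts consecutive trials in the same chunk and relies on `betas` holding all language maps first, then all string maps, as in your snippet):

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from nilearn.decoding import Decoder

n_per_condition = 12  # assumed: 12 beta maps per condition
n_chunks = 4          # -> 3 samples of each condition held out per fold
chunk_ids = np.repeat(np.arange(n_chunks), n_per_condition // n_chunks)
groups = np.concatenate([chunk_ids, chunk_ids])  # same scheme for both conditions

decoder = Decoder(estimator="svc", standardize=False, cv=LeaveOneGroupOut())
decoder.fit(betas, all_labels, groups=groups)

# Cross-validated scores per class and fold:
print(decoder.cv_scores_)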

If we were to do decoding on this example dataset (which has one run), how could I define a CV scheme? This example code is very similar to my data, so I want to do my cross-validation based on chunks of beta estimates rather than runs/sessions or subjects.

Just specify the chunks you want to use, by labeling each sample with a ‘group’ id.
Note however that different estimates from a given run are not statistically independent, hence you will probably end up with optimistic estimates of the generalization ability of the classifier.
HTH,
Bertrand

Thank you.

Is there a way to print / follow the decoder while it is running (i.e., which CV fold it is in, how much time is left, etc.)?

Not that I know of. If you want to have more control, you would rather not use the Decoder object and instead call sklearn’s objects directly.
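For example, you can mask the images yourself and loop over the folds, printing progress as you go. A rough sketch (reusing betas, all_labels, and groups from the earlier snippets; the nilearn.maskers import path assumes nilearn >= 0.9):

import time
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut
from nilearn.maskers import NiftiMasker

masker = NiftiMasker(standardize=False)  # pass a mask_img here if you have one
X = masker.fit_transform(betas)          # (n_samples, n_voxels) array
y = np.asarray(all_labels)

cv = LeaveOneGroupOut()
scores = []
for i_fold, (train, test) in enumerate(cv.split(X, y, groups=groups)):
    t0 = time.time()
    clf = LinearSVC().fit(X[train], y[train])
    scores.append(clf.score(X[test], y[test]))
    print(f"fold {i_fold}: accuracy = {scores[-1]:.2f} "
          f"({time.time() - t0:.1f} s)")

print(f"mean accuracy: {np.mean(scores):.2f}")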
Best,
Bertrand

I tried decoding with my own data.

The decoding is taking a very long time, and I ended up getting a memory error. So now I am trying to decode a smaller sample of the data to see how it goes.

Let me investigate further. If I cannot figure out the reason, I may drop in to the office hours.

Thanks.