fMRI decoding with nilearn - small voxel selection

Dear all,

I am using a nilearn Decoder as specified below. I do my own voxel selection by passing it a mask with a specific number of voxels (e.g. the 100 or 500 strongest voxels from visual cortex). However, the fewer voxels I use, the better the accuracy of my decoder. A time-resolved analysis shows that the 100-voxel result does not follow the expected shape (chance level at time point 0, then rising while the stimulus is on screen), but is instead a flat line well above chance level. (Using more voxels gives the expected time-resolved shape, but much lower accuracy at my time point of interest.) My samples are equally distributed within each run. This leads me to suspect some kind of confound in my data. Is anyone experiencing similar issues?
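(For reference, a quick, untested sketch of how the per-run condition balance can be double-checked, reusing the variable names from the code below:)

    import numpy as np

    # Count how often each condition occurs in every run.
    conditions = np.asarray(decoding_conditions)
    runs = np.asarray(decoding_session)
    for run in np.unique(runs):
        labels, counts = np.unique(conditions[runs == run], return_counts=True)
        print(run, dict(zip(labels, counts)))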

This is my code:

   estimator = "svc"

    param_grid = {
        'C': [ 0.01, 0.1, 1.0],
        'max_iter': [ 10000], 
        'loss': ['hinge'],  
    }
    cv = LeaveOneGroupOut()
    deco = Decoder(estimator=estimator, mask=roi_100, standardize="zscore_sample", cv=cv, screening_percentile=100, scoring='accuracy', n_jobs=20, param_grid=param_grid,
                   smoothing_fwhm=None)

deco.fit(decoding_fmri_niiimgs, decoding_conditions, groups=decoding_session)
classification_accuracy = np.mean(list(deco.cv_scores_.values()))
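As an extra sanity check, the empirical chance level could be estimated with a label-permutation test on the same ROI data. This is a rough, untested sketch, assuming the ROI can be extracted with a NiftiMasker and reusing the variable names from the snippet above:

    import numpy as np
    from nilearn.maskers import NiftiMasker
    from sklearn.model_selection import LeaveOneGroupOut, permutation_test_score
    from sklearn.svm import LinearSVC

    # Extract the ROI time courses as an (n_samples, n_voxels) array.
    masker = NiftiMasker(mask_img=roi_100, standardize="zscore_sample")
    X = masker.fit_transform(decoding_fmri_niiimgs)
    y = np.asarray(decoding_conditions)
    runs = np.asarray(decoding_session)

    # Leave-one-run-out CV; labels are shuffled within runs for each permutation.
    score, perm_scores, pvalue = permutation_test_score(
        LinearSVC(C=1.0, max_iter=10000),
        X, y,
        groups=runs,
        cv=LeaveOneGroupOut(),
        n_permutations=100,
        scoring="accuracy",
        n_jobs=-1,
    )
    print(f"accuracy={score:.3f}, "
          f"permutation mean={perm_scores.mean():.3f}, p={pvalue:.3f}")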

Any help would be appreciated.


More specifically, my question is whether this has to do with the decoding algorithm I am using, and whether the low number of voxels may be prone to some kind of overfitting, resulting in implausible decoding results.

You should make sure that the voxel selection you performed is independent of the test data you are using; otherwise the decoder may well be overfitting.
Also, do you get any meaningful warning messages?
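For illustration, here is a rough, untested sketch of doing the voxel selection inside each cross-validation fold, so that the test run never influences which voxels are kept; visual_mask is a placeholder for whatever broad region the 100/500 voxels were drawn from:

    import numpy as np
    from nilearn.maskers import NiftiMasker
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    masker = NiftiMasker(mask_img=visual_mask, standardize="zscore_sample")
    X = masker.fit_transform(decoding_fmri_niiimgs)
    y = np.asarray(decoding_conditions)
    runs = np.asarray(decoding_session)

    # SelectKBest is refit on each training fold, so the kept voxels can differ per fold.
    pipeline = make_pipeline(
        SelectKBest(f_classif, k=100),
        LinearSVC(C=1.0, max_iter=10000),
    )
    scores = cross_val_score(pipeline, X, y, groups=runs,
                             cv=LeaveOneGroupOut(), scoring="accuracy")
    print("per-run accuracy:", scores, "mean:", scores.mean())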
Best,
Bertrand

Dear Bertrand,
Yes, my voxel selection is based on a separate task. But this happens even if I choose the most “inactive” voxels in my t-map.
Sometimes I get the UserWarning: “Liblinear failed to converge, increase the number of iterations.”
But usually the features are enough to train the model.
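(Regarding that warning: one small, untested tweak would be to give liblinear more iterations and a slightly looser tolerance through the same param_grid that is passed to the Decoder, e.g.:)

    # More iterations / looser tolerance for liblinear; these keys refer to
    # parameters of the underlying LinearSVC, like 'max_iter' and 'loss' above.
    param_grid = {
        'C': [0.01, 0.1, 1.0],
        'loss': ['hinge'],
        'max_iter': [100000],  # 10x the original value
        'tol': [1e-3],         # default is 1e-4
    }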

I’m also interested in the result here. Did you ever figure out the issue?