Dear all,
I am using a Decoder from nilearn as specified below. I do my own voxel selection by passing it a mask with a fixed number of voxels (e.g. the 100 or 500 strongest voxels from visual cortex). However, the fewer voxels I use, the better the accuracy of my decoder. A time-resolved analysis has shown that the result with 100 voxels does not follow the expected shape (chance level at time point 0, then increasing while the stimulus is viewed), but instead shows a flat line far above chance level. (Using more voxels gives the expected time-resolved shape, but much lower accuracy at my time point of interest.) My samples are equally distributed within each run. This leads me to suspect some kind of confounding factor in my data. Is anyone experiencing similar issues?
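One check I could run to probe for such a confound is an empirical chance level obtained by shuffling the labels within runs, to see whether the flat above-chance accuracy survives. This is only a sketch, not part of my pipeline: it reuses roi_100, decoding_fmri_niiimgs, decoding_conditions and decoding_session from my code below, extracts the ROI time courses with a NiftiMasker, and feeds a plain LinearSVC (instead of the nilearn Decoder) to scikit-learn's permutation_test_score.

import numpy as np
from nilearn.maskers import NiftiMasker
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, permutation_test_score

# Time courses from the same 100-voxel ROI used in the Decoder below
# (NiftiMasker lives in nilearn.input_data in older nilearn versions)
masker = NiftiMasker(mask_img=roi_100, standardize="zscore_sample")
X = masker.fit_transform(decoding_fmri_niiimgs)
y = np.asarray(decoding_conditions)
groups = np.asarray(decoding_session)

# Labels are permuted within each run (groups), so run-related structure
# is preserved under the null distribution
score, perm_scores, p_value = permutation_test_score(
    LinearSVC(C=1.0, max_iter=10000), X, y,
    groups=groups, cv=LeaveOneGroupOut(),
    scoring="accuracy", n_permutations=500, n_jobs=20)
print(f"observed: {score:.3f}, permuted mean: {perm_scores.mean():.3f}, "
      f"p = {p_value:.4f}")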
This is my code:
estimator = "svc"
param_grid = {
'C': [ 0.01, 0.1, 1.0],
'max_iter': [ 10000],
'loss': ['hinge'],
}
cv = LeaveOneGroupOut()
deco = Decoder(estimator=estimator, mask=roi_100, standardize="zscore_sample", cv=cv, screening_percentile=100, scoring='accuracy', n_jobs=20, param_grid=param_grid,
smoothing_fwhm=None)
deco.fit(decoding_fmri_niiimgs, decoding_conditions, groups=decoding_session)
classification_accuracy = np.mean(list(deco.cv_scores_.values()))
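A small follow-up I could add right after the fit, to see whether a single run or condition drives the inflated accuracy (deco.cv_scores_ is a dict mapping each condition to its list of per-fold scores):

for condition, fold_scores in deco.cv_scores_.items():
    # one accuracy per left-out run, per condition
    print(condition, np.round(fold_scores, 3))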
Any help would be appreciated.