Apologies for the delay, I was on vacation and only just returned. Regarding the searchlight radius: there is no principled approach for choosing one; it just depends on what size you find appropriate. A 12 mm radius seems quite common (which is 4 voxels at 3x3x3 mm resolution and 6 voxels at 2x2x2 mm). Please make sure you really specify voxels and not mm!
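In TDT this is set through the searchlight fields of cfg; the field names below follow the standard decoding template, but double-check them against your toolbox version:

```matlab
% Searchlight settings (TDT) -- assumes 3x3x3 mm voxels,
% so a 4-voxel radius corresponds to roughly 12 mm
cfg.searchlight.unit = 'voxels';  % 'voxels', not 'mm'
cfg.searchlight.radius = 4;       % radius in the unit chosen above
cfg.searchlight.spherical = 1;    % spherical rather than cubic searchlight
```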
Glad you found the bottleneck. It seems you are running the analysis on single trials. I would first try scaling the data to see if that improves the speed, using
decoding_scale_data. I would also consider not using single-trial estimates (64 betas per condition) but really only one beta per condition per run. If you really want to use single-trial estimates and scaling doesn’t help, then I’d encourage you to reduce the classifier cost c to 0.01 or 0.001. This can be done by setting the following:
For classification_kernel (an internal trick that speeds up computation by precomputing the linear kernel):
cfg.decoding.train.classification_kernel.model_parameters = '-s 0 -t 4 -c 0.01 -b 0 -q';
and if you don’t want to use that trick:
cfg.decoding.train.classification.model_parameters = '-s 0 -t 0 -c 0.01 -b 0 -q';
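As for the scaling suggestion above: scaling is normally enabled through cfg rather than by calling decoding_scale_data directly, which TDT then applies internally. The values below are the ones from the standard template; treat them as a sketch:

```matlab
% Enable data scaling (TDT calls decoding_scale_data internally)
cfg.scale.method = 'min0max1';  % scale each voxel to the [0 1] range
cfg.scale.estimation = 'all';   % estimate scaling parameters on all data
```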
If that is still not fast enough, then you should probably use a different method. I would recommend crossnobis (see our template), since it’s comparably fast, generally performs pretty well, and yields nice continuous distance estimates rather than binary accuracies.
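Roughly, switching to crossnobis looks like the sketch below. The exact field values are assumptions on my part (taken from memory of the crossnobis template), so please verify every line against the template that ships with TDT:

```matlab
% Crossnobis (cross-validated distance) -- sketch only; the authoritative
% settings are in the crossnobis template shipped with TDT
cfg.decoding.software = 'distance';       % distance-based analysis instead of a classifier
cfg.decoding.method = 'classification';   % method name is kept, per the template
cfg.decoding.train.classification.model_parameters = 'cveuclidean'; % cross-validated Euclidean on noise-normalized betas
```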