Optimizing runtime and pulling coefficient weights when decoding

Hey,
I’m sorry if I am being redundant here, but I could not find an existing thread for what I was looking for. I am running a decoding analysis on a large amount of fMRI data and am trying to improve the runtime. I have set n_jobs=-1 to use all available cores, but is there a better way to speed things up? For reference, I am following the nilearn decoding example, Decoding with Haxby.
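For context, here is roughly what my setup looks like (a minimal sketch based on the Haxby example; `mask_img`, `fmri_imgs`, and `labels` stand in for my own data):

```python
from nilearn.decoding import Decoder

# Decoder as in the nilearn Haxby decoding example;
# n_jobs=-1 spreads the cross-validation fits over all available cores.
decoder = Decoder(
    estimator="svc",  # support vector classifier
    mask=mask_img,    # placeholder: my brain mask
    cv=5,
    n_jobs=-1,
)
decoder.fit(fmri_imgs, labels)  # placeholders: my 4D fMRI data and condition labels
```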

Also, in the above decoding example, I am really interested in pulling out the coefficient weights at the end. I know this isn’t straightforward when using a non-linear kernel such as “rbf”; any suggestions on how to achieve this?
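To illustrate what I mean with plain scikit-learn (a toy sketch; the data here is random, just to show the API):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 100))   # toy stand-in for (n_samples, n_voxels) data
y = rng.integers(0, 2, size=40)  # toy binary condition labels

# Linear kernel: the learned weights live directly on the features.
linear_svc = SVC(kernel="linear", C=1.0).fit(X, y)
weights = linear_svc.coef_  # shape (1, n_voxels) for binary classification

# RBF kernel: the decision function lives in an implicit feature space,
# so there is no coef_ attribute to pull out.
rbf_svc = SVC(kernel="rbf").fit(X, y)
# rbf_svc.coef_  # raises AttributeError: coef_ is only available with a linear kernel
```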

Thanks again for your help!
Cheers,
Nichollette

To reduce computation time, there are only a few options: choose a simple (linear) classifier and keep the default hyperparameters, e.g. C=1 for an SVM. The danger is that these choices may be suboptimal.
For the sake of interpretability, you should probably use a linear classifier, which has explicit coefficients on the features. With fMRI data, there is little to no benefit in using non-linear classifiers.
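Concretely, with nilearn’s Decoder and a linear estimator, the weights map straight back onto voxels. A minimal sketch (`mask_img`, `fmri_imgs`, `labels`, and the “face” condition name are placeholders for your own data):

```python
from nilearn.decoding import Decoder

# Linear SVC; n_jobs=-1 parallelizes the cross-validation fits.
decoder = Decoder(estimator="svc", mask=mask_img, cv=5, n_jobs=-1)
decoder.fit(fmri_imgs, labels)

# coef_img_ maps each condition label to a Nifti image of voxel weights,
# ready for plotting or saving.
weight_img = decoder.coef_img_["face"]  # placeholder condition name
weight_img.to_filename("svc_weights_face.nii.gz")
```

You can then visualize such a weight map with, e.g., nilearn.plotting.plot_stat_map.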


Thank you for your response; it’s good to know I am already doing most of what I can and that a linear classifier is sufficient. Thanks again!