Dear experts,
I’m conducting intra-subject MVPA analyses to look at individual differences, and I have a question about data reporting criteria. For each subject’s data, I tried different methods (e.g., cross-validation, with or without regularization, etc.) to obtain the highest accuracy. However, not every subject reached their highest accuracy with the exact same method. For example, one subject’s data gives the highest accuracy with 5-fold cross-validation, while another subject’s data gives the highest accuracy with 6 folds. Am I supposed to use the exact same method for all subjects’ data even if it is not the optimal method for each individual? Or should I just stick with whichever method gives the highest accuracy for each subject? If so, how should I report that in a paper, and is there anything to pay attention to when comparing results across subjects? Is there a rule of thumb for this sort of MVPA analysis? It would be great if you could point me to a paper that does something similar.
Thanks,
Lily
You should use the same methods for all subjects. Otherwise, you’re biasing the results.
Assume that there is no effect in your data: you should get chance accuracy. Yet by tuning all parameters on a per-subject basis, you will unavoidably obtain better-than-chance accuracy. This has to be avoided by all means. Btw, it is not really interesting to know that 6-fold works better for subject 2. You want to know what works better on average: the expected value of your gain/accuracy.
If you want an adaptive strategy, you can use nested cross-validation, but I’d rather discourage this.
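In case it helps, here is a minimal sketch of what nested cross-validation looks like with scikit-learn; the classifier, parameter grid, and fold numbers are just placeholders for illustration, not a recommendation for your data:

```python
# Minimal sketch of nested cross-validation (placeholder data and grid).
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import LinearSVC

X = np.random.randn(60, 100)   # placeholder: trials x voxels
y = np.repeat([0, 1], 30)      # placeholder: condition labels

inner_cv = StratifiedKFold(n_splits=3)  # inner loop: tunes the regularization C
outer_cv = StratifiedKFold(n_splits=5)  # outer loop: estimates the accuracy

clf = GridSearchCV(LinearSVC(), param_grid={"C": [0.01, 0.1, 1, 10]}, cv=inner_cv)
scores = cross_val_score(clf, X, y, cv=outer_cv)
print(scores.mean(), scores.std())
```

The point is that the parameter choice happens inside the inner loop only, so the outer accuracy estimate is not inflated by the tuning.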
Hi Bertrand,
I get what you’re saying. An issue with my study is that the experiment itself is somewhat adaptive/self-paced, so each subject gets a different number of events per condition. What ends up happening is that, for a few subjects with fewer trials in a certain condition, the data has to be trained with a higher number of folds, because those conditions don’t have enough trials to form a subset if I use fewer folds. If the condition frequencies already differ across subjects, does it make sense to use a more adaptive way to train the data?
Thanks,
Lily
I would advise you to strive to use the same strategy for all subjects.
Scikit-learn offers some options for an adaptive cross-validation strategy with its Stratified methods (e.g., StratifiedKFold).
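For example, a minimal sketch could look like the following; capping the number of folds at the smallest condition count is just my illustration, not an established recipe:

```python
# Minimal sketch: stratified folds, with the number of folds capped by the
# rarest condition (this capping rule is only an illustration).
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import LinearSVC

X = np.random.randn(45, 100)       # placeholder: trials x voxels
y = np.array([0] * 30 + [1] * 15)  # placeholder: unbalanced condition labels

n_splits = min(5, np.bincount(y).min())  # never more folds than the rarest condition allows
cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
scores = cross_val_score(LinearSVC(), X, y, cv=cv)
print(scores.mean())
```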
If you cannot find a single strategy for all subjects, try to define the strategy before looking at the results, and state explicitly in the paper’s appendix which criterion you based your decision on.
You may also want to use the imbalanced-learn library.
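A minimal sketch of how that could be combined with cross-validation; the over-sampler and classifier are placeholders, and the sampler sits inside the pipeline so it is only applied to the training folds:

```python
# Minimal sketch with imbalanced-learn (import name: imblearn).
import numpy as np
from imblearn.over_sampling import RandomOverSampler
from imblearn.pipeline import Pipeline
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import LinearSVC

X = np.random.randn(45, 100)       # placeholder: trials x voxels
y = np.array([0] * 30 + [1] * 15)  # placeholder: unbalanced condition labels

pipe = Pipeline([("resample", RandomOverSampler(random_state=0)),
                 ("clf", LinearSVC())])
scores = cross_val_score(pipe, X, y, cv=StratifiedKFold(n_splits=5))
print(scores.mean())
```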
Do you know if nilearn has a somewhat similar function, or do I have to go for scikit-learn?
Nilearn uses sklearn under the hood, but you will probably need to call the sklearn objects directly, given your specific needs.
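For instance, a minimal sketch using nilearn only for the masking and sklearn for the cross-validation; the file names and mask are placeholders, and in older nilearn versions the masker lives in nilearn.input_data rather than nilearn.maskers:

```python
# Minimal sketch: nilearn for masking, sklearn for the cross-validation.
import numpy as np
from nilearn.maskers import NiftiMasker
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import LinearSVC

masker = NiftiMasker(mask_img="mask.nii.gz", standardize=True)
X = masker.fit_transform("subject01_beta_maps.nii.gz")  # trials x voxels
y = np.loadtxt("subject01_labels.txt", dtype=int)       # one label per trial

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LinearSVC(), X, y, cv=cv)
print(scores.mean())
```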
I will look into that. Thank you very much!