hiya!

i’m in the process of performing second-level (population-level) statistics on subject-level searchlight mvpa decoding results to test the prevalence hypothesis: that the regional effects are typical of the population.

i am moving forward with the i-test for prevalence approach, which is a modification of the prevalence inference approach using the minimum statistic that i’ve seen discussed on this forum.

i know second-level statistics on accuracies is a topic of ongoing debate, but i haven’t seen any more recent posts on the matter. what do people generally think of the i-test approach?

also, i was wondering if there are any issues with simply performing a binomial test on the number of subjects that show significant decoding at the first level? the idea would be to compute subject-level p-values using permutation tests, binarize each subject as significant or not at some threshold, and then perform a binomial test on those binary values with the null probability = 0.5. i think this would test whether the number of subjects with significant decoding accuracies was greater (more prevalent) than expected by chance. does this make sense?
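to make the proposal concrete, here’s a minimal sketch of what i have in mind, with simulated accuracies standing in for real searchlight results (the data, the number of permutations, and the alpha level are all placeholders, and `scipy.stats.binomtest` is just one way to run the final test):

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(0)

n_subjects = 20
n_perms = 1000
alpha = 0.05

# toy stand-ins for real data: one observed accuracy per subject,
# plus a permutation null distribution of accuracies per subject
observed_acc = rng.normal(0.55, 0.05, size=n_subjects)
null_acc = rng.normal(0.50, 0.05, size=(n_subjects, n_perms))

# subject-level permutation p-value: proportion of null accuracies
# at least as large as the observed one (with the +1 correction)
p_vals = ((null_acc >= observed_acc[:, None]).sum(axis=1) + 1) / (n_perms + 1)

# binarize at alpha, then test the count of significant subjects
# against a null probability of 0.5 (one-sided)
n_significant = int((p_vals < alpha).sum())
result = binomtest(n_significant, n=n_subjects, p=0.5, alternative="greater")
print(n_significant, result.pvalue)
```

(one thing i notice writing it out: with p = 0.5 the test is against a majority-style null, whereas under the global null each subject is only significant with probability alpha, so "expected by chance" might instead argue for p = alpha — part of why i’m asking.)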

thanks!

cooper