stats_permutation: p-value extraction from the null distribution

Hello Fellow TDT users,

I’m using permutation testing to assess the significance of my first-level classification results.

I was confused by the following lines in the stats_permutation function:

switch lower(tail)
    
    case 'left'

        if exist('bsxfun','builtin') % New method for Matlab 7.4+ (fast)
            p = (1/(sz_ref(2)+1))*(sum(bsxfun(@ge,n_correct,reference),2)+1);
        else
            p = (1/(sz_ref(2)+1))*(sum(repmat(n_correct,1,sz_ref(2))>=reference,2)+1);
        end
        
    case 'right'
        
        if exist('bsxfun','builtin') % New method for Matlab 7.4+ (fast)
            p = (1/(sz_ref(2)+1))*(sum(bsxfun(@le,n_correct,reference),2)+1);
        else
            p = (1/(sz_ref(2)+1))*(sum(repmat(n_correct,1,sz_ref(2))<=reference,2)+1);
        end

Shouldn’t it be the case that for right-tail inference we should count the number of permutations that resulted in classification accuracy above the true value? If so, doesn’t the function @le do the exact opposite?

Also, what is the purpose of adding 1 to the count? As far as I understand, the correct permutation is included in the permutation matrix, so the p value can never be 0 anyway. Am I missing something?

Many thanks for this great toolbox and for your help,
-Matan

re: my first question. I understand now: since n_correct is given as the first argument, bsxfun(@le, n_correct, reference) tests n_correct <= reference, so it counts exactly the permutations at or above the true value. I guess I get very high p-values because many permutations result in classification accuracy that is equal to the true one, rather than above it.
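
To make that concrete, here is a toy check (made-up numbers, not actual TDT output) of what the right-tail branch counts:

n_correct = 80;                            % true classification result
reference = [75 80 80 82 60 80 90 80];     % toy null distribution (8 permutations)

% bsxfun(@le, n_correct, reference) tests n_correct <= reference,
% i.e. it flags permutations that reach AT LEAST the true value:
flags = bsxfun(@le, n_correct, reference); % -> [0 1 1 1 0 1 1 1]

% ties (reference == n_correct) are counted too, which is why many
% tied permutations drive the p value up:
p = (sum(flags, 2) + 1) / (numel(reference) + 1)  % -> (6+1)/9 = 0.78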

Yeah, the code reads a little confusingly, but glad that it’s clear now!

In actual fact, p = 0.05 is totally acceptable, even though it is not our convention (exactly 5% still belongs to the lowest 5%). The problem is that for discrete distributions the bin width can often extend beyond the 5% mark, which makes it impossible to tell whether the result is significant or not. The only thing that helps here is to use a continuous results measure (or to smooth results slightly for searchlights, which is totally ok). You may want to try
cfg.results.output = 'signed_decision_values';
which will give you something like an accuracy weighted by the decision values (1*DV if it is correct, and -1*DV if it is incorrect).
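
For illustration only, one plausible reading of such a measure (a toy sketch, not TDT’s internal code; the labels and decision values are made up):

true_labels = [ 1  1 -1 -1  1 -1];             % hypothetical test labels
dec_values  = [ 0.9 -0.2 -1.3  0.4  1.1 -0.7]; % hypothetical classifier decision values

predicted = sign(dec_values);                  % SVM-style prediction from the DV sign
correct   = (predicted == true_labels);        % was each trial classified correctly?

% +1*|DV| for correct trials, -1*|DV| for incorrect ones, averaged
% into a single continuous results measure:
signed_dv = mean((2*correct - 1) .* abs(dec_values))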

Yes, if the correct permutation is included, you don’t need to add the 1; but if you do Monte Carlo sampling (which most people do), then it is only included by chance. Ideally I would always include the original permutation, and I might make that the default in the future. If you are confident your original permutation is included, you can remove the +1, which would make your result slightly less conservative.
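
In code, the two conventions would look roughly like this (a minimal sketch with made-up numbers, not the toolbox implementation):

n_perm    = 1000;
observed  = 0.75;                          % true accuracy
null_dist = 0.5 + 0.1*randn(1, n_perm);    % stand-in for the permutation results

% Monte Carlo sampling: the original labeling is only in the null set
% by chance, so add 1 to numerator and denominator (guarantees p > 0):
p_mc = (sum(null_dist >= observed) + 1) / (n_perm + 1);

% If the original (unpermuted) labeling is known to be included, it
% already contributes one count >= observed, so no +1 is needed:
null_with_orig = [observed, null_dist];
p_exact = sum(null_with_orig >= observed) / numel(null_with_orig);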

Just in case you (or anyone else) are interested: this paper is, for me, still the best primer on the topic (see section 4.2 for my explanation above), and it does an excellent job of explaining the important distinction between randomization tests, permutation tests, and Monte Carlo permutation tests, which are commonly confused (the latter two I just subsumed under “permutation test” in TDT).
