Negative logarithm p-value threshold

Hi!

I’ve noticed that non_parametric_inference from nilearn outputs negative log10 p-values.
Can you please point me to some reading about why we should use the negative log instead of plain p-values?

And regarding the definition of the significance threshold, I found the following in the nilearn tutorial:

Since we are plotting negative log p-values and using a threshold equal to 1, it corresponds to corrected p-values lower than 10%, meaning that there is less than 10% probability to make a single false discovery (90% chance that we make no false discovery at all).

So if we want alpha = 0.05, should we use a threshold equal to 0.5 (which corresponds to less than 5% probability to make a single false positive discovery)?

Thanks a lot!


I think the negative log scale just helps accentuate/visualize very small p-values that might get lost on a linear scale … it's also nice that small p-values are represented as the darker color this way.

So if we want alpha = 0.05, should we use a threshold equal to 0.5 (which corresponds to less than 5% probability to make a single false positive discovery)?

No — for alpha = 0.05, the negative log10 threshold would be -log10(0.05) ≈ 1.3, not 0.5. (A threshold of 1 corresponds to alpha = 0.1, since -log10(0.1) = 1.)
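To make the conversion concrete, here is a minimal sketch of mapping a desired alpha onto the equivalent threshold for a -log10 p-value map (using numpy; the variable names are just for illustration):

```python
import numpy as np

# desired significance level
alpha = 0.05

# equivalent threshold on a -log10(p) map:
# keep voxels where -log10(p) > threshold, i.e. p < alpha
threshold = -np.log10(alpha)
print(round(threshold, 3))  # 1.301

# sanity check in the other direction: threshold 1 corresponds to p = 0.1
print(10 ** -1.0)  # 0.1
```

So you would pass something like `threshold=-np.log10(0.05)` (≈ 1.3) to the plotting call rather than 0.5.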