Permutation Test

Hi all, I have a perfectly balanced data set: 50% condition 1 and 50% condition 2.

I am using an L1-regularized model with leave-one-out cross-validation. My ROI is an FFA mask with 223 voxels, and I have 4000 observations.

My accuracy is ~67.3%, which is great; but when I shuffle the Y values for permutation testing, the permutation accuracies also come out very close to 67%.

I'm a bit confused about how this is happening, given that a model with 223 voxels and 4000 observations seems unlikely to overfit to this degree.

I simulated random data with the same 223 × 4000 structure and got an accuracy of ~50%, as expected. Yet when I permute my actual data, I get accuracies around 67%, occasionally even higher. How can that be?
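For reference, here is a minimal sketch in Python/NumPy of the permutation-test logic I have in mind (my actual pipeline is not shown here; this stand-in uses a simple nearest-centroid classifier instead of the L1 model, and smaller dimensions for speed). On pure-noise data, both the observed and the label-permuted leave-one-out accuracies should hover around chance (~50%):

```python
import numpy as np

def loo_nearest_centroid_accuracy(X, y):
    """Leave-one-out accuracy of a nearest-centroid classifier."""
    n = len(y)
    correct = 0
    for i in range(n):
        mask = np.ones(n, dtype=bool)
        mask[i] = False                      # hold out observation i
        Xtr, ytr = X[mask], y[mask]
        c0 = Xtr[ytr == 0].mean(axis=0)      # class-0 centroid (train only)
        c1 = Xtr[ytr == 1].mean(axis=0)      # class-1 centroid (train only)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        correct += (pred == y[i])
    return correct / n

rng = np.random.default_rng(0)
n, p = 200, 20                               # smaller than 4000 x 223 for speed
X = rng.standard_normal((n, p))              # pure-noise features
y = np.repeat([0, 1], n // 2)                # perfectly balanced labels

true_acc = loo_nearest_centroid_accuracy(X, y)
perm_accs = [loo_nearest_centroid_accuracy(X, rng.permutation(y))
             for _ in range(20)]             # shuffle Y, refit, score
print(f"observed: {true_acc:.2f}, permuted mean: {np.mean(perm_accs):.2f}")
```

The key point is that the labels are re-shuffled before the entire cross-validation loop, so each permuted run refits the model from scratch with no information tying Y to X; that is why I expect chance-level accuracy here.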

My understanding is that multicollinearity would not affect model performance, so I'm confused why there is such a large difference between my simulated-data accuracy and my permuted-data accuracy.
