Thanks for your answer!
I agree that in theory we can end up with an eigenvalue whose eigenspace has more than one dimension (and then two or more orthogonal eigenvectors span this subspace). If we want to play a little bit with the math and consider the diagonalization of the covariance matrix (which is symmetric and thus diagonalizable), we would have
$$\Sigma = P D P^\top,$$
where $P$ is the orthogonal matrix containing the eigenvectors and $D$ is the diagonal matrix containing the eigenvalues. We can interpret each eigenvalue as the proportion of variance aligned with the corresponding direction, so a degenerate eigenvalue would mean that two orthogonal directions explain exactly the same variance (this is exactly what bothers me here). Indeed, if we assume that the proportion of variance explained along a given direction follows some continuous unknown distribution between 0 and 1, which I don't think is too restrictive, it would be very unlikely that
$$\lambda_i = \lambda_j, \quad i \neq j,$$
where $\lambda_i$ and $\lambda_j$ are two distinct roots of the characteristic polynomial. If the data is obtained experimentally, an exact tie sounds very unlikely to me.
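This intuition is easy to check numerically. The sketch below (a hypothetical example, assuming generic noisy data drawn from a standard normal distribution) diagonalizes a sample covariance matrix, verifies the $PDP^\top$ reconstruction, and confirms that the sample eigenvalues come out pairwise distinct:

```python
import numpy as np

# Hypothetical example: generic noisy data, 500 observations of 4 variables
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
Sigma = np.cov(X, rowvar=False)   # sample covariance matrix (symmetric)

# Diagonalization Sigma = P D P^T; eigh is for symmetric matrices
eigvals, P = np.linalg.eigh(Sigma)
D = np.diag(eigvals)

# Reconstruction check: P D P^T recovers Sigma
assert np.allclose(P @ D @ P.T, Sigma)

# For data with a continuous distribution, the eigenvalues are
# pairwise distinct with probability 1; the smallest gap is positive
gaps = np.diff(eigvals)           # eigh returns eigenvalues in ascending order
print(gaps.min())
```

Of course this only illustrates the generic case; it does not rule out exact degeneracy enforced by a symmetry of the measurement process itself.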
Have you ever observed such a degeneracy in experimental data? What kind of data/measurements could produce such a surprising result?