For second-level statistics, you can normally assume that your decoding-level accuracies are not positively biased (i.e. that chance level really is chance and that effects don't appear larger than they really are). In that case you can run a simple t-test. When people tell you that you must use a permutation test, that applies only to the decoding-level results, because cross-validated accuracies are not normally distributed.
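To make this concrete, here is a minimal sketch of such a second-level t-test in Python. The accuracy values are made up for illustration; I'm assuming a two-class problem with a 50% chance level and a one-sided test (accuracy above chance):

```python
import numpy as np
from scipy import stats

# Hypothetical decoding accuracies (% correct) for 20 participants;
# chance level for a two-class problem is 50%.
rng = np.random.default_rng(0)
accuracies = rng.normal(loc=55.0, scale=5.0, size=20)

# One-sample t-test of the group accuracies against chance,
# one-sided (alternative: mean accuracy > 50%).
t_stat, p_value = stats.ttest_1samp(accuracies, popmean=50.0,
                                    alternative="greater")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

In practice you would of course replace the simulated `accuracies` with the per-subject accuracies from your decoding analysis.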
The caveat is that the hypothesis you are testing at the second level is not a random-effects test but effectively a fixed-effects test. The reason is that the null distribution of true effects cannot go below chance (truly below-chance accuracy does not exist). Under the null, every subject's true accuracy therefore sits exactly at chance (50%), which is a distribution of true effects with zero variance. All remaining variance has to come from within-subject variability. So, in essence, the assumption that you can ignore subject-level variability by running a random-effects test is violated. This is all explained in detail here. You can also have a look at my video lecture on MVPA statistics here.
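A small simulation makes the point, under the assumption that each subject contributes an accuracy from 100 binary trials: under the null, every subject's true accuracy is exactly 50%, so any spread you see across subjects is purely within-subject (trial-level) noise.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_trials = 20, 100

# Under the null hypothesis, every subject's TRUE accuracy is exactly
# chance (50%), i.e. the between-subject variance of true effects is zero.
# All observed spread comes from within-subject binomial noise.
observed = rng.binomial(n_trials, 0.5, size=n_subjects) / n_trials * 100

print(f"mean = {observed.mean():.1f}%, sd = {observed.std(ddof=1):.1f}%")
# The sd across subjects here reflects only within-subject variability,
# which is why a t-test on these values is in effect a fixed-effects test.
```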
The solution is to run permutations at the decoding level and test whether a notable proportion of participants carries the effect. This does not work for the type of analysis you ran, since there are too few possible permutations. I know that Carsten is currently working on a statistical testing procedure that should allow valid permutation testing at the level of the model estimation. Until then, I would suggest using a t-test and mentioning in your publication that this is in effect a fixed-effects test, citing the above reference.
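For cases where decoding-level permutations *are* feasible, the logic can be sketched as follows. This is a simplified, hypothetical illustration (simulated accuracies and stand-in permutation nulls, not real data, and a crude count-based test rather than formal prevalence inference): each subject gets a p-value from their own permutation null, and you then ask whether more subjects show an effect than the false-positive rate alone would produce.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_subjects, n_perms = 20, 1000

# Hypothetical data: each subject's observed decoding accuracy (%) plus a
# subject-specific permutation null (from label shuffling at the decoding
# level). Real analyses would compute both from the data.
observed = rng.normal(58, 3, n_subjects)
null_dists = rng.normal(50, 4, (n_subjects, n_perms))

# Subject-level p-value: rank of the observed accuracy within that
# subject's own permutation null distribution.
p_subj = ((null_dists >= observed[:, None]).sum(axis=1) + 1) / (n_perms + 1)

# Count subjects significant at alpha = .05, then test whether that
# proportion exceeds the 5% expected from false positives alone.
n_sig = int((p_subj < 0.05).sum())
res = stats.binomtest(n_sig, n_subjects, p=0.05, alternative="greater")
print(f"{n_sig}/{n_subjects} subjects significant, p = {res.pvalue:.4g}")
```

Note that this count-based shortcut is only a rough stand-in for a proper prevalence test; it is meant to show the structure of the approach, not to replace it.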