For some background, I’ve implemented a (large) number of decoding analyses in TDT (3.999) with 2 labels and 46 runs. These are ROI analyses for a single ROI. For many of these classifications, the data are unbalanced (e.g., 2 patterns for label A and 6 patterns for label B per run). I’ve used a sub-sampling approach with multiple repeated bootstraps (e.g., 10). I’ve requested balanced accuracy as well as predicted labels and true labels. The brain images are organized in the design matrix as: first all the pattern 1s across the 46 runs, then the pattern 2s, and so on through the pattern 8s.
The predicted labels I get out of this are a single vector (3680×1, where 3680 = 46 runs × 10 bootstrap reps × 8 patterns per run (6 + 2)).
The true labels come out as a 1 × 460 cell array (460 = 46 runs × 10 bootstrap reps), with each cell containing an 8 × 1 vector (the 6 + 2 patterns per run).
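To sanity-check the shapes, here is a minimal sketch (in Python/numpy rather than MATLAB, with hypothetical label values) of flattening such a 1 × 460 cell array of 8 × 1 vectors. Concatenating the cells step by step yields one 3680-element vector whose ordering is unambiguous: decoding step (run × bootstrap) on the outside, the 8 patterns contiguous within each step.

```python
import numpy as np

n_runs, n_boot, n_pat = 46, 10, 8          # sizes from the post
n_steps = n_runs * n_boot                  # 460 decoding steps

# Stand-in for the 1 x 460 true-label cell array: one 8 x 1 vector per step
# (2 patterns of label 1, 6 of label 2, as in the post; values hypothetical).
true_cells = [np.array([1, 1, 2, 2, 2, 2, 2, 2]) for _ in range(n_steps)]

# Concatenating the cells step by step gives a single vector in an
# unambiguous step-major order: the 8 patterns of step 1, then step 2, ...
true_flat = np.concatenate(true_cells)
print(true_flat.shape)   # (3680,) -- same length as the predicted-label vector
```

The open question is only whether the predicted-label vector uses this same step-major ordering or a different one.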
I’m trying to match these up against each other, but I’m struggling a bit to figure out how the predicted-labels vector is organized.
Could someone please help verify that the predicted-labels vector is organized as (top to bottom):
- within a given pattern:
  - within each pattern–run conjunction:
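In case it helps while waiting for an authoritative answer: a hypothesized ordering can also be checked empirically, since the correct alignment of predicted against true labels should reproduce the accuracy TDT reported, while a wrong alignment falls toward chance. A hedged sketch with synthetic data (Python/numpy; the two candidate orderings and the 90% agreement rate are assumptions for illustration, not TDT’s actual convention):

```python
import numpy as np

n_runs, n_boot, n_pat = 46, 10, 8
n_steps = n_runs * n_boot                       # 460 decoding steps
rng = np.random.default_rng(1)

# Hypothetical true labels: 2 x label 1, 6 x label 2 per step, step-major.
true_cells = [rng.permutation([1, 1, 2, 2, 2, 2, 2, 2]) for _ in range(n_steps)]
true_flat = np.concatenate(true_cells)          # (3680,)

# Synthetic predictions that agree with truth 90% of the time, then stored
# in PATTERN-MAJOR order (all pattern 1s across steps, then pattern 2s, ...).
pred_step_major = np.where(rng.random(true_flat.size) < 0.9,
                           true_flat, 3 - true_flat)   # flip 10% of labels
pred_pattern_major = pred_step_major.reshape(n_steps, n_pat).T.reshape(-1)

def agreement(pred, assume_pattern_major):
    """Fraction of predictions matching the step-major true labels,
    under a hypothesized storage order of `pred`."""
    if assume_pattern_major:
        pred = pred.reshape(n_pat, n_steps).T.reshape(-1)
    return np.mean(pred == true_flat)

# The correct hypothesis recovers ~0.9; the wrong one drops toward chance.
a_right = agreement(pred_pattern_major, assume_pattern_major=True)
a_wrong = agreement(pred_pattern_major, assume_pattern_major=False)
```

The same trick applied to the real outputs (comparing each candidate reshaping against the reported balanced accuracy) should reveal which ordering TDT actually used.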