Question about #tdt predicted labels output

For some background, I’ve implemented a (large) number of decoding analyses in TDT (3.999) with 2 labels and 46 runs. These are ROI analyses for a single ROI. For many of these classifications, the data are unbalanced (e.g. 2 patterns for label A, 6 patterns for label B per run). I’ve used a sub-sampling approach with multiple repeated bootstraps (e.g. 10). I’ve requested balanced accuracy as well as predicted labels and true labels. The brain images are organized in the design matrix as: first all the pattern 1s across the 46 runs, then the pattern 2s, and so on through the pattern 8s.

The predicted labels I get out of this is a single vector (3680 × 1, where 3680 = 46 runs × 10 bootstrap reps × 8 patterns per run (6 + 2)).

The true labels I get out is a 1 × 460 cell array (46 runs × 10 bootstrap reps), with each cell containing an 8 × 1 vector (6 + 2 patterns per run).

I’m trying to match these up against each other, but I’m struggling a bit to figure out how the predicted labels vector is organized.

Could someone please help verify that the predicted labels vector is organized as (top to bottom):
within a given pattern:
within each Pattern-run conjunction:



This sounds like a very specific case! At least it’s only 2 labels. :sweat_smile:

predicted_labels is a flattened vector across all decoding steps (in your case, bootstrap repetitions), whereas true_labels places each decoding step in a separate cell. Hence, if you flatten true_labels, you should get the labels corresponding element-by-element to predicted_labels:

```matlab
true_labels_new = vertcat(true_labels{:});
```
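For illustration, here is a minimal sketch of how one could match the two outputs and recompute balanced accuracy by hand, assuming `predicted_labels` is the 3680 × 1 vector and `true_labels` the 1 × 460 cell array described above (the per-label averaging is the standard definition of balanced accuracy, not a claim about TDT internals):

```matlab
% Flatten the cell array: each cell is one decoding step, in the same
% order the steps appear in predicted_labels.
true_labels_new = vertcat(true_labels{:});   % 3680 x 1

% Element-wise agreement between predictions and ground truth
correct = (predicted_labels == true_labels_new);

% Balanced accuracy: mean of the per-label accuracies, so the
% majority label (6 patterns) doesn't dominate the minority (2)
labels = unique(true_labels_new);
acc_per_label = arrayfun(@(l) mean(correct(true_labels_new == l)), labels);
balanced_acc = mean(acc_per_label);
```

This should agree with the balanced accuracy TDT reports; if it doesn't, the step ordering assumption above is the first thing to check.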


Thanks Martin!

Did I mention that I need to do 254 different classifications? :sweat_smile:

Meanwhile, a big thanks to you and Kai for a very easy-to-use toolbox that made this monster analysis a cinch to implement.
