Decoding the mean of multiple stimulus labels

Hi all!

This is a conceptual question rather than one about the software itself (I use TDT SVR). In my experiment, I show participants 1 to 4 oriented gratings (Gabor patches) sequentially. Their orientations (i.e. the stimulus labels) are independent of each other, but counterbalanced over the sequence positions.

I see a classic “load effect” when I decode each orientation in each of the load conditions (1-4): items in loads 2-4 are decoded with lower accuracy than the single item in load 1.

If I instead train on the average orientation label (in the load 2-4 conditions), decoding accuracy increases and is no longer different from load 1. I think this makes sense: the average of all the orientation labels may sit closer to the actual chance level, so a “chance-level” guess suddenly counts as accurate. However, I may also be completely misunderstanding what’s happening here?
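To make my reasoning concrete, here is a minimal NumPy sketch of what I mean (not TDT code, and I'm ignoring the circularity of orientation, just treating labels as plain numbers between 0 and 180, which is roughly what the SVR sees): averaging four independent labels shrinks the trial-to-trial spread of the target, so a constant prediction at the grand mean already lands much closer to the averaged label than to any single label.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 10000

# Load 1: one orientation label per trial, drawn uniformly from 0-180 deg
load1_labels = rng.uniform(0, 180, size=n_trials)

# Load 4: four independent orientation labels per trial,
# averaged into the single label I would train the SVR on
load4_labels = rng.uniform(0, 180, size=(n_trials, 4))
load4_mean = load4_labels.mean(axis=1)

print(f"SD of single labels (load 1):   {load1_labels.std():.1f} deg")
print(f"SD of averaged labels (load 4): {load4_mean.std():.1f} deg")

# A "chance-level" predictor that always guesses the grand mean (90 deg)
const_pred = np.full(n_trials, 90.0)
print(f"Mean abs. error of constant guess vs single labels:   "
      f"{np.abs(const_pred - load1_labels).mean():.1f} deg")
print(f"Mean abs. error of constant guess vs averaged labels: "
      f"{np.abs(const_pred - load4_mean).mean():.1f} deg")
```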

Does this result make sense given my experimental parameters?

I’m grateful for any input!