Leave one run out vs. leave one chunk out within a run

Hello,

I have been told that doing MVPA classification within a run (i.e. chunking the same type of regressors within a run and cross-validating with those chunks as training and test sets, instead of using individual runs as training and test sets) is bad practice.

May I ask why that is not recommended, and could someone please point me to some studies discussing this issue?


A short answer is temporal dependencies: volumes within a run are generally more similar to each other than volumes from different runs. The papers listed in the “literature” section of
MVPA Meanderings: where to start with MVPA? give some background. (And yeah, that post really needs updating!)
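
In case a concrete example helps, here is a minimal sketch of the two schemes with scikit-learn's LeaveOneGroupOut (the array names and the random data are just placeholders, not anyone's real pipeline): when the groups are run labels, training and test trials always come from different runs; when the groups are chunks defined within runs, trials from the same run sit on both sides of every split, which is exactly where the temporal dependencies can leak.

```python
# Rough sketch only: random placeholder data, so both scores hover around
# chance -- the point is how the folds are built, not the numbers.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_runs, n_trials_per_run, n_voxels = 6, 20, 500

# One pattern (e.g. a trial-wise beta map) per row, with alternating class labels.
X = rng.standard_normal((n_runs * n_trials_per_run, n_voxels))
y = np.tile([0, 1], n_runs * n_trials_per_run // 2)

# Leave-one-run-out: groups are run labels, so no run ever contributes to both
# the training and the test side of a fold.
run_labels = np.repeat(np.arange(n_runs), n_trials_per_run)
acc_runs = cross_val_score(LinearSVC(), X, y,
                           groups=run_labels, cv=LeaveOneGroupOut())

# Within-run chunking: groups are arbitrary chunk labels defined inside each
# run, so every run ends up on both sides of every fold.
chunk_labels = np.tile(np.repeat(np.arange(4), n_trials_per_run // 4), n_runs)
acc_chunks = cross_val_score(LinearSVC(), X, y,
                             groups=chunk_labels, cv=LeaveOneGroupOut())

print("leave-one-run-out folds:", acc_runs.round(2))
print("within-run chunk folds: ", acc_chunks.round(2))
```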


Dear Jo,

Thank you so much for your answer. Speaking of temporal dependencies, I am curious whether one can eliminate these temporal effects by evenly distributing the occurrences of the regressors across a single run, so that they do not occur close to each other. In other words, each regressor would be sampled from any point in the run rather than being limited to the beginning, middle, or end.

Hi,
Usually in GLM analyses you have other regressors (motion, drifts, nuisance) that create correlations between the effect estimates, whether those correspond to nearby trials or not. Hence you cannot consider two beta or stat maps stemming from one run to be independent, and thus cannot take some of them as an independent test set.
Best,
Bertrand Thirion
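
To make the point about shared nuisance regressors concrete, here is a toy numpy sketch (the design is invented, not from a real experiment): two conditions placed far apart in a run still share the drift and intercept columns of the design matrix, so the covariance of their GLM estimates, sigma^2 * (X'X)^-1, has a nonzero off-diagonal term, i.e. the two betas from that run are correlated no matter how the trials are spread out.

```python
# Toy illustration only: an invented design with two condition regressors at
# opposite ends of a run, plus a linear drift and an intercept.
import numpy as np

n_scans = 200
t = np.arange(n_scans)

cond_a = np.zeros(n_scans)
cond_a[10:30] = 1.0        # condition A occurs early in the run
cond_b = np.zeros(n_scans)
cond_b[160:180] = 1.0      # condition B occurs late in the run
drift = (t - t.mean()) / n_scans   # shared linear drift regressor
intercept = np.ones(n_scans)

X = np.column_stack([cond_a, cond_b, drift, intercept])

# Under i.i.d. noise, the OLS estimates have covariance sigma^2 * (X'X)^-1;
# look at the 2x2 block for the two condition betas.
cov_beta = np.linalg.inv(X.T @ X)
print(np.round(cov_beta[:2, :2], 4))
# The off-diagonal entry is not zero: beta_A and beta_B from this run are
# coupled through the shared drift and intercept regressors, even though the
# two conditions never occur close to each other in time.
```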
