TDT cross-validation design


#1

Hello, I want to run an experiment comprising 3 or 4 runs. The separate runs are just a way to let subjects take some rest; the conditions are the same in all runs. There are, say, 5 conditions (e.g., words, faces, tools, …), organized in short homogeneous blocks (e.g., 10 faces at 1/second, i.e., 10 seconds per block). In other words, runs consist of a random alternation of the 5 types of short blocks. Let’s assume that I want to decode faces from tools. I speculate that the best cross-validation (CV) design would be:
(1) create a GLM with one regressor per block plus one regressor per run;
(2) on each CV step, leave out one random “faces” block and one random “tools” block, ignoring the distinction between runs.
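
To make it concrete, here is a rough sketch of the scheme in plain MATLAB (random placeholder data instead of real block betas, and fitcsvm just as an example classifier):

    % Rough sketch of step (2); 'patterns' stands in for the per-block betas
    n_face = 12; n_tool = 12; n_vox = 100;            % e.g. 3-4 blocks/run over 3-4 runs
    patterns = randn(n_face + n_tool, n_vox);         % placeholder block patterns
    labels   = [ones(n_face,1); -ones(n_tool,1)];     % +1 = faces, -1 = tools
    f = find(labels == 1); t = find(labels == -1);
    n_folds = 100; acc = zeros(n_folds, 1);
    for k = 1:n_folds
        test  = [f(randi(n_face)); t(randi(n_tool))]; % one random block of each class
        train = setdiff((1:numel(labels))', test);    % all other blocks, runs ignored
        model  = fitcsvm(patterns(train,:), labels(train));         % linear SVM
        acc(k) = mean(predict(model, patterns(test,:)) == labels(test));
    end
    fprintf('mean accuracy: %.2f\n', mean(acc));
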
My questions are:

  • Am I correct?
  • Would the decoding be easy to implement using TDT?

Thank you very much in advance!
LC

#2

Sorry for the late reply; please add the tag “tdt” in the future if you would like faster replies!

The short answer is: whenever possible, you want to run a leave-one-run-out cross-validation, to ensure independence of training and test data and to make sure the results are not biased (positively or negatively) by a confound, which in your case is run.
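
To see why this matters, here is a toy illustration in plain MATLAB (my assumptions: no real signal at all, just Gaussian noise plus a run-specific offset; all numbers are arbitrary). The leave-one-block-per-class-out scheme you proposed tends to come out biased away from chance, while leave-one-run-out stays around 0.50:

    % Toy illustration: pure noise + run-specific offsets, no real signal
    rng(1);
    n_runs = 4; n_per = 5; n_vox = 50;                % 5 faces + 5 tools blocks per run
    run_id = repelem((1:n_runs)', 2*n_per);
    labels = repmat([ones(n_per,1); -ones(n_per,1)], n_runs, 1);
    X = randn(numel(labels), n_vox) + 5*repelem(randn(n_runs, n_vox), 2*n_per, 1);

    % (a) leave out one random faces + one random tools block, runs ignored
    f = find(labels == 1); t = find(labels == -1); acc = zeros(200, 1);
    for k = 1:200
        test  = [f(randi(numel(f))); t(randi(numel(t)))];
        train = setdiff((1:numel(labels))', test);
        acc(k) = mean(predict(fitcsvm(X(train,:), labels(train)), X(test,:)) == labels(test));
    end

    % (b) leave-one-run-out
    acc_loro = zeros(n_runs, 1);
    for r = 1:n_runs
        test = run_id == r;
        acc_loro(r) = mean(predict(fitcsvm(X(~test,:), labels(~test)), X(test,:)) == labels(test));
    end
    fprintf('block-wise: %.2f   leave-one-run-out: %.2f   (chance = 0.50)\n', mean(acc), mean(acc_loro));
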

I will (hopefully next week) post some of my lectures online, including one on designing MVPA studies that explains this in more detail.

General recommendations (these are just rules of thumb):

  • more runs, ideally 8-10; avoid fewer than 4-6
  • repeat all conditions in all runs, and have the same number of blocks per condition in each run
  • potentially add a little bit of time between blocks to avoid carryover effects
  • model effects either per block or per run, but not both (unless you think of the constant effect as a nuisance variable, which in my opinion isn’t the case here)

Then get beta maps, one per condition per run (or one per condition per block per run), and carry out leave-one-run-out cross-validation, which is super easy to implement in TDT (if you use SPM or AFNI, even with one line of code) :slight_smile:
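
For the SPM route, the core of it would look something like this (beta_loc, the results directory, and the condition names are placeholders to adapt; compare the template scripts that ship with TDT):

    % Sketch assuming an SPM first-level model with one regressor
    % per condition per run (paths and names are placeholders)
    beta_loc = '/path/to/first_level';            % folder containing SPM.mat and betas
    cfg = decoding_defaults;
    cfg.analysis    = 'searchlight';              % or 'roi' / 'wholebrain'
    cfg.results.dir = '/path/to/results';

    regressor_names = design_from_spm(beta_loc);  % read regressor names from SPM.mat
    cfg = decoding_describe_data(cfg, {'faces' 'tools'}, [1 -1], regressor_names, beta_loc);

    cfg.design = make_design_cv(cfg);             % leave-one-chunk-out, chunks = runs
    results = decoding(cfg);

decoding_describe_data picks up which run each beta belongs to, so each run becomes one chunk and make_design_cv gives you exactly the leave-one-run-out design described above.
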

Best,
Martin


#3

Hey @ulysses & @Martin,

In case you don’t know it already, this might be an interesting read:
Assessing and tuning brain decoders: cross-validation, caveats, and guidelines by Gaël Varoquaux and friends.
As you can tell from the title, it also includes a super neat and comprehensive assessment of different CV strategies.
Note: that’s the link to the preprint version on arXiv; the final version was published in NeuroImage.

HTH, best, Peer