TDT: Single run issue


I am running into the issue mentioned here: TDT: Decoding having only one run

I had posted a response there, but nobody replied, so I decided to create this topic with the appropriate tag and ask again in case the original thread did not gather attention.

I have one run and 2 regressors, which are divided into 4 chunks. I have enough repetitions (20 or so per chunk), so that should be fine. I believe I should be using the unbalanced-data script. I tried one of the methods explained in that file, but the original error remains.

P.S. A small portion of my participants have 3 runs. Is it possible to run the searchlight on all my participants at the same time (both the 3-run and single-run participants), or should I handle them separately?

Hi Tamer,

Apologies for not responding earlier. Sometimes things are quite busy, and with issues that I assume are a little easier to solve I might not respond immediately. Check out decoding_tutorial.m; it really explains everything line by line!

Essentially, TDT automatically loads the data for you and then automatically sets up the desired cross-validation: leave-one-run-out, using all estimated betas per condition per run. In your case you have one run. If you used SPM, the data should still load just fine, but you would have to change the cross-validation scheme (assuming you have more than one regressor per condition per run), because you no longer have several runs to leave out.

As a trick, you can simply edit the information in cfg.files after your script calls design_from_spm. If you change the chunk entries from all 1s to however you would like to chunk your data, you trick TDT into thinking you are dealing with several separate runs, and the regular "leave-one-chunk-out" CV should work.
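For readers following along, the chunk trick might look roughly like this in a standard TDT script. This is only a sketch: the paths, the label names ('faces', 'hands'), and in particular the 4-chunk assignment at the end are placeholders that you would need to adapt to your own design and to the actual ordering of your betas.

```matlab
% Sketch of the single-run "chunk trick" (standard TDT workflow assumed;
% paths, labels, and the chunk assignment are placeholders).
cfg = decoding_defaults;
cfg.analysis = 'searchlight';
cfg.results.dir = 'path/to/results';

beta_loc = 'path/to/spm/betas';                  % folder with SPM.mat and betas
regressor_names = design_from_spm(beta_loc);
cfg = decoding_describe_data(cfg, {'faces' 'hands'}, [1 -1], ...
                             regressor_names, beta_loc);

% With a single run, cfg.files.chunk is all 1s. Reassigning pseudo-run
% labels (e.g. 4 chunks) makes leave-one-chunk-out CV possible again.
% CAUTION: this repmat pattern assumes the betas alternate in a way that
% makes consecutive betas fall into consecutive chunks; check your own
% beta order before using it.
cfg.files.chunk = repmat((1:4)', numel(cfg.files.chunk)/4, 1);

cfg.design = make_design_cv(cfg);                % now leave-one-chunk-out
results = decoding(cfg);
```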

Now, it is possible that TDT still keeps track of the run number. It would not allow you to (inadvertently) do cross-validation within a run (since this is not recommended), but if you want to do it anyway, TDT will throw an error telling you which flag to set. I hope this is rather straightforward once you are there, but do reach out in case you cannot resolve the issue!

Best wishes,

Dear Martin, thank you for your reply! I will check the scripts you have directed me to.

It would not allow you to (inadvertently) do cross-validation within a run (since this is not recommended)

May I ask why this is not recommended? Just to make sure: when you refer to "CV within a run", do you also mean CV within a single run that has been divided into several chunks? For example, say I present faces and hands within one run, and I split the run into two chunks, each containing both conditions. I train on one chunk and test on the other, with both chunks coming from the same run.

Is this not recommended, and if so, why? Are there any papers that tackle this issue and explain the reasoning? Knowing the reason would be very valuable to us.

Sorry, I missed this! It has to do with the non-independence of neighboring data points: trials close together in time share noise (temporal autocorrelation), so train and test sets drawn from the same run are not truly independent, which is quite difficult to take into account. For block designs it's probably fine.

Check out this work by Jeanette Mumford.