Task not optimized for MVPA - is analysis even possible?

Hello all!!
Our lab ran an event-related fMRI task, but the univariate results are uninteresting. We're now attempting MVPA and running into issues.

Here is our task:
- 2 s TR
- 2 s stimulus presentation (not advanced by participant response), followed by 0.75 s feedback, then a 0–6 s inter-trial interval

Most MVPA packages seem to want everything parceled by volume, i.e., each volume contains one discrete event. Clearly, our data do not fit so neatly into TR bins.

Our task was not designed with MVPA in mind, so the big questions we're hoping can be answered are: 1) is MVPA even possible with our design, and 2) if so, do you have any advice on how to proceed (e.g., papers, software packages, anything! We're desperate!!).

Honestly, just knowing the answer to the first question would be a big help. Nobody in our lab has any experience with MVPA, so we’re all brand new to this! Looking at various software packages is pretty daunting too…

Anyway, we would really appreciate your help. Thanks for your time!
Kade

Have you considered trial-level modeling like least-squares separate (LSS; Mumford et al., 2012; Turner et al., 2012)? There is a BIDS App that does LSS called NiBetaSeries that you could try.

The idea is to fit a series of GLMs, one per trial: in each GLM, the trial of interest gets its own regressor while the remaining trials are modeled together. You then collect the trial-specific contrast maps across GLMs to build your “beta series”.
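In case it helps to see the shape of it, here's a rough sketch of the LSS loop using nilearn (filenames and column values are placeholders, not from your study):

```python
# Minimal LSS sketch with nilearn: one GLM per trial, with that trial as its
# own regressor ("this_trial") and every other trial collapsed into a single
# "other_trials" regressor. Filenames below are hypothetical.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

events = pd.read_csv("sub-01_task-learn_run-1_events.tsv", sep="\t")  # onset, duration, trial_type
bold = "sub-01_task-learn_run-1_bold.nii.gz"

beta_maps, beta_labels = [], []
for i in range(len(events)):
    lss_events = events.copy()
    lss_events["trial_type"] = "other_trials"
    lss_events.loc[i, "trial_type"] = "this_trial"

    glm = FirstLevelModel(t_r=2.0, hrf_model="spm", noise_model="ar1")
    glm = glm.fit(bold, events=lss_events)

    # The trial-specific beta map is the contrast on the single-trial regressor.
    beta_maps.append(glm.compute_contrast("this_trial", output_type="effect_size"))
    beta_labels.append(events.loc[i, "trial_type"])  # keep the original condition label
```

NiBetaSeries wraps this kind of loop for you, but the sketch shows roughly what's happening under the hood.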


@tsalo Those are very helpful papers, thanks for the tip! We’ll have to check out that app you recommended as well.

So if I’m understanding correctly, the beta series contrast maps can be thought of sort of like volumes (at least in regard to how you might treat them in an MVPA analysis)?


Exactly! I’ve only ever used LSS for studying task-based functional connectivity in rapid event-related designs, but from what I recall it was developed specifically with MVPA in mind.
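If it helps, continuing the earlier sketch, the decoding step could look roughly like this (the mask filename is a placeholder; `beta_maps` and `beta_labels` come from the LSS loop above):

```python
# Rough sketch: treat each trial's beta map as one sample for decoding,
# here with nilearn's Decoder (a linear SVM) and stratified k-fold CV.
from nilearn.decoding import Decoder
from sklearn.model_selection import StratifiedKFold

decoder = Decoder(
    estimator="svc",                        # linear SVM
    mask="gray_matter_mask.nii.gz",         # hypothetical mask image
    cv=StratifiedKFold(n_splits=5),
    standardize=True,
)
decoder.fit(beta_maps, beta_labels)
print(decoder.cv_scores_)                   # cross-validated scores per class
```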

Okay, thank you very much for the help! It is greatly appreciated.

Were the events you wanted to analyze presented in random order, and in roughly equal numbers in each run (the per-run part is most relevant for single-subject analyses)? If so, analysis is likely possible, whether with a GLM-based form of temporal compression (as already suggested) or some other approach (e.g., analyzing single frames, or averages of a few frames, per event). But if there's some sort of order to the conditions (e.g., one type of trial always occurs a few seconds before the other), or a major imbalance (e.g., 10 trials of one class but 100 of the other), analysis may not be practical.
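For concreteness, the "averages of a few frames per event" option could look something like this rough sketch (variable names are illustrative; the 4–8 s window is just a common heuristic for the hemodynamic peak):

```python
# Sketch of simple temporal compression without a GLM: average the volumes
# ~4-8 s after each onset into one pattern per trial. Assumes preprocessed
# data already loaded as a (time x voxels) array named bold_data, plus an
# events table with an "onset" column; both names are illustrative.
import numpy as np

tr = 2.0
peak_window = (4.0, 8.0)  # seconds after onset

trial_patterns = []
for onset in events["onset"]:
    start = int(np.round((onset + peak_window[0]) / tr))
    stop = int(np.round((onset + peak_window[1]) / tr)) + 1
    trial_patterns.append(bold_data[start:stop].mean(axis=0))
trial_patterns = np.vstack(trial_patterns)  # (n_trials, n_voxels)
```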

@jaetzel The task was a two-category learning task, with a between-subjects manipulation of whether the response button for each category was fixed (button 1 = category A) or varied randomly (button 1 = category A or B). Events were presented randomly, in a pattern unique to each participant. Each participant was also tested on an equal number of stimuli from each category. I forgot to mention that we collected our data in only two runs…

The two runs differed slightly: the first presented 150 trials using whatever button mapping participants were trained on, while the second presented 208 trials in 8-trial mini-blocks that alternated between fixed and random mappings.

We figured we would have to treat each run as a separate task and either use mini-blocks in place of multiple runs (similar to how the second run is set up) or work at the group level. Honestly, we know it's not ideal, but we don't want to give up without exhausting all of our options! (P.S. Love your blog, by the way.)

Yes, this does sound feasible. If a group analysis is sensible (train on some people, test on others), that might help with cross-validation. Otherwise, yes, using mini-blocks within the single run for cross-validation sounds reasonable; if possible, group the blocks in time (e.g., train on the first 4/10 of the run, test on the last 4/10, then reverse). (And thanks about the blog. :blush: I haven't posted nearly as much as I'd like lately!)
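By "grouped in time" I mean something like this sketch (assuming you already have one pattern and one label per trial, ordered in time; names carried over from the earlier sketches):

```python
# Two-fold, temporally grouped cross-validation: train on trials from the
# first 4/10 of the run, test on the last 4/10, then reverse, leaving the
# middle out to reduce temporal leakage between train and test sets.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

trial_labels = np.asarray(trial_labels)      # one label per trial, in order
n_trials = len(trial_labels)
position = np.arange(n_trials)

early = position < int(0.4 * n_trials)       # first 4/10 of the run
late = position >= int(0.6 * n_trials)       # last 4/10 of the run

scores = []
for train_idx, test_idx in [(early, late), (late, early)]:
    clf = LinearSVC().fit(trial_patterns[train_idx], trial_labels[train_idx])
    scores.append(accuracy_score(trial_labels[test_idx],
                                 clf.predict(trial_patterns[test_idx])))
print(np.mean(scores))                       # average of the two folds
```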

Wonderful! That’s great to hear. I’m not sure if we’ve wholly decided on anything yet, but it’s good to know we have options! And thanks for the suggestion on grouping by time, as well as for your help in general. We definitely appreciate it!!

I wish I had known this a couple of years ago. I ended up redoing a task with a block paradigm after realizing that the dataset I had been given for MVPA used short events, among other issues.
