Dear Martin and all,
I would like to maximize the sensitivity of my decoding analysis pipeline.
I have four fMRI blocks/runs per participant. Each run contains 60 trials: 30 trials of stimulus A and 30 trials of stimulus B, for a total of 240 stimulus trials per participant. We used sparse fMRI for the auditory stimuli, with a TR of 3000 ms and a TA of 1000 ms. Participants had to indicate after each trial whether they perceived A or B. In a first step of the analysis, I would like to decode stimulus identity. Because the stimuli were ambiguous, we would like to decode in a second step the response/decision made by the participant, in order to differentiate the areas required for decision making from the areas implicated in sensory stimulus processing.
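To make the planned analysis concrete, this is roughly the scheme I have in mind for step one, as a minimal sketch in scikit-learn with leave-one-run-out cross-validation (the beta patterns below are random placeholders for illustration; for the second step I would simply swap in the decision labels):

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Placeholder: trial-wise beta patterns, one row per trial,
# e.g. extracted from one searchlight sphere or ROI.
rng = np.random.default_rng(0)
betas = rng.standard_normal((240, 500))          # (n_trials, n_voxels)

runs = np.repeat([1, 2, 3, 4], 60)               # run labels (chunks)
stimulus = np.tile(np.repeat(["A", "B"], 30), 4) # stimulus identity labels
# for step two, the labels would instead be the behavioural responses

# leave-one-run-out cross-validation over the four runs
acc = cross_val_score(LinearSVC(), betas, stimulus,
                      groups=runs, cv=LeaveOneGroupOut())
print("mean accuracy:", acc.mean())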
In an earlier analysis on another dataset, I used a first-level design that included one beta weight per stimulus trial as predictor (48 stimulus trials per participant in total).
Now I have 240 trials per participant (cross-validation design with two folds), and it takes almost 7 hours to compute the accuracy map for a single participant. I foresee that permutation testing will take far too much computation time.
I wanted to ask you for your advice: how many beta weights would you define, and how would you group/chunk the stimulus trials for the betas? (One option I am considering is sketched below.)
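To illustrate what I mean by chunking: one option would be to average trial-wise betas within each run and condition into pseudo-trials, e.g. 5 trials per chunk, which would give 6 patterns per condition per run and 48 patterns in total, the same count as in my earlier analysis. A minimal sketch of the averaging step (array shapes are assumptions):

import numpy as np

def chunk_betas(trial_betas, trials_per_chunk=5):
    """Average consecutive trial-wise betas of one condition within one run.

    trial_betas : (n_trials, n_voxels) array, e.g. the 30 betas of
                  stimulus A in run 1, in presentation order
    returns     : (n_trials // trials_per_chunk, n_voxels) chunk patterns
    """
    n_trials, n_voxels = trial_betas.shape
    n_chunks = n_trials // trials_per_chunk
    return (trial_betas[:n_chunks * trials_per_chunk]
            .reshape(n_chunks, trials_per_chunk, n_voxels)
            .mean(axis=1))

# 30 trials of one condition in one run -> 6 chunk patterns
rng = np.random.default_rng(0)
chunks = chunk_betas(rng.standard_normal((30, 500)))
print(chunks.shape)  # (6, 500)

Would you rather average trial-wise betas post hoc like this, or define the chunks directly as regressors in the first-level design?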
I just read:
Sohoglu, Ediz, Sukhbinder Kumar, Maria Chait, and Timothy D. Griffiths. “Multivoxel Codes for Representing and Integrating Acoustic Features in Human Cortex.” bioRxiv, August 9, 2019, 730234. https://doi.org/10.1101/730234.
I was wondering whether their approach using cross-validated multivariate analysis of variance (cvMANOVA; Allefeld and Haynes, 2014) would be more appropriate for my classification problems, perhaps also looking at the stimulus identity x decision interaction?
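In case it helps, this is my (simplified, possibly naive) understanding of the cross-validated pattern distinctness statistic from Allefeld and Haynes (2014), assuming run-wise GLM betas and residuals have already been estimated. I have left out the bias correction and normalization from the paper, and the regressor ordering in the interaction contrast is only a hypothetical example:

import numpy as np

def cv_manova(betas, residuals, contrast):
    """Simplified cross-validated MANOVA (pattern distinctness),
    leaving one run out as the independent test partition.

    betas     : list of (n_regressors, n_voxels) arrays, one GLM per run
    residuals : list of (n_timepoints, n_voxels) residual arrays, one per run
    contrast  : (n_contrasts, n_regressors) contrast matrix
    """
    n_runs = len(betas)
    D, n_pairs = 0.0, 0
    for train in range(n_runs):
        # residual covariance estimated from the training run only
        E = residuals[train]
        dof = E.shape[0] - betas[train].shape[0]
        sigma_inv = np.linalg.pinv(E.T @ E / dof)
        delta_train = contrast @ betas[train]   # (n_contrasts, n_voxels)
        for test in range(n_runs):
            if test == train:
                continue
            delta_test = contrast @ betas[test]
            # cross-validated distinctness for this train/test pair
            D += np.trace(delta_train @ sigma_inv @ delta_test.T)
            n_pairs += 1
    return D / n_pairs

# toy demo: 4 runs, 4 regressors, 50 voxels, 120 volumes per run
rng = np.random.default_rng(0)
betas = [rng.standard_normal((4, 50)) for _ in range(4)]
residuals = [rng.standard_normal((120, 50)) for _ in range(4)]
# hypothetical interaction contrast for a 2x2 design,
# regressor order: A/left, A/right, B/left, B/right
interaction = np.array([[1, -1, -1, 1]])
print(cv_manova(betas, residuals, interaction))

Does this capture the gist of their method, and would you expect it to be faster than running a classifier with permutation testing?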
Many thanks for your advice