Eh, sorry – I think I was wrong and misguided you. For a spatio-temporal searchlight you would indeed still first need to prepare your "samples" to consist of multiple volumes, and to do that you need the eventrelated_dataset call. Without it, dataset features would not acquire any information about temporal adjacency, so no spatio-temporal searchlight (at least in the current implementation) would work.
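To make the "samples of multiple volumes" idea concrete, here is a NumPy-only sketch (not the PyMVPA API itself; all shapes and numbers are made up) of what an event-related dataset effectively does: each sample becomes a flattened block of consecutive volumes, so the features carry the temporal adjacency the searchlight needs.

```python
import numpy as np

# Hypothetical illustration (not PyMVPA's eventrelated_dataset itself):
# turn runs of consecutive volumes into single spatio-temporal samples.
rng = np.random.default_rng(0)
n_vols, n_voxels = 20, 5
data = rng.standard_normal((n_vols, n_voxels))  # (time, features)

# events as (onset volume index, number of volumes per event)
events = [(2, 3), (8, 3), (14, 3)]

# flatten each block of `dur` volumes into one sample:
# features are now (voxel, time-offset) pairs
samples = np.vstack([data[on:on + dur].ravel() for on, dur in events])
print(samples.shape)  # (3, 15): 3 events, 5 voxels x 3 time points
```

A spatio-temporal searchlight then gets to see all voxel/time-offset features of a neighborhood at once, which is exactly what a plain volume-wise dataset cannot provide.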
Correct. The onsets you provide to eventrelated_dataset will be used to determine which volumes to take. If they are not aligned to the volume onsets, I believe eventrelated_dataset should fail, because any recent NumPy refuses to slice volumes starting at float indices.
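A minimal demonstration of that failure mode (plain NumPy, independent of PyMVPA): a misaligned onset effectively asks for a non-integer index, which NumPy rejects, while an onset rounded onto the volume grid slices fine.

```python
import numpy as np

# NumPy refuses non-integer indices -- which is what a misaligned
# onset would effectively request.
vols = np.arange(10)
try:
    vols[2.5]  # an onset falling between two volumes
    rejected = False
except (IndexError, TypeError):
    rejected = True
print("float index rejected:", rejected)

# after rounding the onset onto the volume grid, slicing works
onset = int(round(2.5))
print(vols[onset:onset + 3])
```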
Is there any particular reason you are interested in spatio-temporal searchlighting? Remember that there is no "free lunch" here: if we go that route, you would need to account for the increased multiple-comparisons burden, etc. So the easiest approach is to assume a model (well, the HRF), run fit_event_hrf_model, and use those fits to perform a regular spatial searchlight.
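For intuition, here is a NumPy-only sketch of that easier route (fit_event_hrf_model itself lives in PyMVPA; the HRF shape, onsets, and amplitudes below are all made up): convolve a boxcar of rounded onsets with a canonical-ish HRF, fit it per voxel by least squares, and keep the resulting beta map, which a plain spatial searchlight can then consume.

```python
import numpy as np

# Toy HRF-model fit (illustration only, not PyMVPA's implementation).
rng = np.random.default_rng(1)
n_vols, n_voxels, tr = 100, 6, 2.0

t = np.arange(0, 30, tr)
hrf = t * np.exp(-t / 4.0)   # crude gamma-like HRF, peak ~4 s
hrf /= hrf.max()

# boxcar with onsets rounded onto the volume grid, then convolved
boxcar = np.zeros(n_vols)
for onset_s in (12.0, 60.0, 110.0, 160.0):
    boxcar[int(round(onset_s / tr))] = 1.0
regressor = np.convolve(boxcar, hrf)[:n_vols]

# per-voxel least-squares fit of the HRF regressor (+ intercept)
X = np.column_stack([regressor, np.ones(n_vols)])
Y = 0.5 * regressor[:, None] + 0.1 * rng.standard_normal((n_vols, n_voxels))
betas, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(betas.shape)  # one HRF beta + one intercept per voxel
```

The per-voxel betas in `betas[0]` play the role of the "fits" you would feed into an ordinary spatial searchlight, with only one sample per event rather than a whole time window.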
But if you really insist on being able to do spatio-temporal analysis with your jittered design, I guess we could still do it, but it would require one of two things. Either you sacrifice the information about the jitter by rounding your onsets, proceed with that simple boxcar, and apply the spatio-temporal searchlight to the resulting evds (though if you really believe in the jitter's necessity, and did simultaneous motion/slice-timing correction to retain strict control over timing, then I would hate to go against that). Or you do more involved modeling with tent or other basis functions (instead of a single HRF fit per trial), so that for each event you get multiple fits at different delays, and then perform the spatio-temporal analysis on those.
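To illustrate the second option, here is a hypothetical NumPy sketch of an FIR/"tent"-style design (the helper name `fir_design` is made up, not a PyMVPA function): one regressor per post-stimulus delay, so each event yields several estimates at different lags instead of a single HRF-shaped fit, and those per-delay estimates could then feed a spatio-temporal analysis.

```python
import numpy as np

# Hypothetical FIR/tent design builder (illustration only): a delta
# regressor at (onset volume + delay) for every event and delay.
def fir_design(onsets_s, tr, n_vols, n_delays):
    X = np.zeros((n_vols, n_delays))
    for onset_s in onsets_s:
        vol = int(round(onset_s / tr))  # note: jitter is rounded here too
        for d in range(n_delays):
            if vol + d < n_vols:
                X[vol + d, d] = 1.0
    return X

X = fir_design([3.1, 20.7, 41.4], tr=2.0, n_vols=30, n_delays=4)
print(X.shape, X.sum())  # one delta per (event, delay) pair
```

Fitting this design by least squares gives, per voxel, one beta per delay column; stacking those per-delay betas per event is what would give the searchlight its temporal dimension back.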
Hope this helps