PyMVPA Multi-dimensional Searchlights

Hi All, I have trouble doing event-related searchlight analysis and would appreciate it if anyone could help! As you can see from the tutorial page, the searchlight is applied to a dataset named evds, which is generated in this way:

evds = eventrelated_dataset(ds, events=events)

“ds” is our dataset, which consists of brain volumes (or samples), and “events” contains information about the onsets, durations, and condition labels of our events. What the function “eventrelated_dataset” does is segment the original time series dataset (ds) into event-related samples. As I understand it, it extracts (multiple) consecutive samples for each event; so eventually there will still be brain volumes (samples) in evds, not exact event timing information. This might be reasonable when events are synchronized with the TR, but not for cases that include jitter. Is that right? If yes, is there any other function that I can use? For example, can I use the output of the “fit_event_hrf_model” function (it is perfectly fine in this function to have events that are not synchronized with the TR) for this purpose? Thank you!
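To make the question concrete, here is a minimal numpy-only sketch of what I understand the boxcar segmentation to do (this is my toy illustration, not PyMVPA's actual implementation): for each TR-aligned event onset, a fixed number of consecutive volumes is taken and flattened into one sample, so the features of evds become (volume, voxel) pairs:

```python
import numpy as np

# Toy "time series dataset": 10 volumes x 4 voxels
ds = np.arange(40).reshape(10, 4)

# Event onsets expressed as volume indices (TR-aligned);
# each event spans 3 consecutive volumes
onsets = [0, 4]
boxlength = 3

# Segment: each event becomes ONE sample whose features are all
# (volume, voxel) pairs of its window -- the "boxcar" idea
evds = np.array([ds[o:o + boxlength].ravel() for o in onsets])

print(evds.shape)  # (2, 12): 2 events, 3 volumes x 4 voxels each
```

Note that the onsets here must be integer volume indices, which is exactly where jittered (non-TR-aligned) onsets become a problem.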


you do not necessarily need to “boxcar” the dataset to do “spatio-temporal” (or any other combination of dimensions/groupings) searchlights. Please have a peek at the slides I prepared for an informal explanation a few years back. If that still does not help – let me know and maybe we can boil down the code for your particular use case


Thank you so much for your response. The slides are very helpful. So, simply doing res = sl(ds) rather than evds = eventrelated_dataset(ds, events=events) followed by res = sl(evds) would give us the same results? Then why does the tutorial suggest the latter – segmenting the dataset based on the events and then using that new event-related dataset as the input to sl?

My second question is: if using the segmented dataset gives us better results, can we also use the output of fit_event_hrf_model rather than eventrelated_dataset? I’m asking because, since we have jittering in our experiment, some information will be lost if we look at volumes (samples) only.

Just to add to my previous post: under “From Timeseries To Spatio-temporal Samples” on the Event-related Data Analysis page it is said that “… The next and most important step is to actually segment the original time series dataset into event-related samples.”, which is followed by the related line of code (evds = eventrelated_dataset(ds, events=events)). This is exactly what is done later in Multi-dimensional Searchlights (“… First let’s re-create the dataset with the spatio-temporal features from Event-related Data Analysis”).

Eh, sorry – I think I was wrong and have misguided you. For a spatio-temporal searchlight you would indeed still need to first prepare your “samples” to consist of multiple volumes, i.e., to do that eventrelated_dataset call. Without it, dataset features would not acquire any information about temporal adjacency, so no spatio-temporal searchlight (at least in the current implementation) would work.

Correct. The onsets you provide to eventrelated_dataset will be used to determine which volumes to take. If they are not aligned to the volume onsets, I believe eventrelated_dataset should fail, because any recent numpy would refuse to slice up the volumes starting at float indices.
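You can see the underlying numpy behavior directly (a toy demonstration, independent of PyMVPA): indexing an array with a non-integer fails, which is why jittered onsets that map to fractional volume indices cannot be used for boxcar slicing as-is:

```python
import numpy as np

volumes = np.arange(10)  # toy volume indices of a time series

# TR-aligned onset -> integer index, slicing works fine
window = volumes[3:3 + 2]
print(window)  # [3 4]

# Jittered onset (e.g. 3.5 volumes) -> float index, numpy raises an error
try:
    volumes[3.5]
except (IndexError, TypeError) as exc:
    print("float indexing fails:", exc)
```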

Is there any particular reason why you are interested in spatio-temporal searchlighting? Remember that there is no “free lunch” here: if we start doing it, you would need to account for the boosted number of multiple comparisons, etc. So the easiest approach is to assume a model (well – the HRF), do the fit_event_hrf_model, and use those fits to perform a regular spatial searchlight.
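Conceptually, the HRF-model route boils down to a per-voxel GLM fit: one regressor per condition (an HRF-convolved event train), with the fitted betas then serving as the samples for a regular spatial searchlight. A minimal numpy sketch of that idea (the design matrix here is faked with random regressors rather than real HRF convolution, so this only illustrates the least-squares step, not fit_event_hrf_model itself):

```python
import numpy as np

rng = np.random.default_rng(0)
n_vols, n_vox = 100, 5

# Hypothetical design matrix: one regressor per condition.
# In a real analysis each column would be an HRF-convolved event train,
# which can use exact (jittered) onset times.
X = rng.standard_normal((n_vols, 2))
true_betas = np.array([[2.0, 0.0, 1.0, 0.5, 0.0],
                       [0.0, 3.0, 0.0, 0.5, 1.0]])
Y = X @ true_betas + 0.01 * rng.standard_normal((n_vols, n_vox))

# Least-squares fit per voxel: the betas (one row per condition)
# are what would then feed a regular spatial searchlight
betas, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(betas.shape)  # (2, 5): 2 conditions x 5 voxels
```

Because the model is evaluated in continuous time before being sampled at the TRs, no jitter information is thrown away – which is the appeal of this approach over boxcar segmentation.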

But if you really insist on being able to do spatio-temporal analysis with your jittered design, I guess we could still do it, but it would require one of two things. Either you sacrifice the information about the jitter by rounding your onsets, proceed with that simple boxcar, and apply the spatio-temporal searchlight to that evds (although if you really believe in the jitter’s necessity and did simultaneous motion/slice-timing correction to retain strict control over timing, then I would hate to go against this). Or you do more involved modeling with tent or other basis functions (instead of a single HRF per trial), so that for each event you would get multiple fits at different delays, and then perform the spatio-temporal analysis on those.
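The first option (sacrificing the jitter) is just a rounding step before segmentation; a tiny numpy sketch with made-up onset times:

```python
import numpy as np

TR = 2.0  # hypothetical repetition time in seconds
onsets_sec = np.array([0.0, 4.7, 9.3, 13.1])  # jittered onsets in seconds

# Round each onset to the nearest volume index, discarding the jitter,
# so the boxcar segmentation can slice at integer indices
onset_vols = np.rint(onsets_sec / TR).astype(int)
print(onset_vols)  # [0 2 5 7]
```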

Hope this helps


Thank you again so much for your help. I think I’ll do a regular spatial searchlight for now; I had wanted a more accurate analysis by going spatio-temporal. (1) So, just to make sure and double-check: rather than using volumes, I can use the fits from fit_event_hrf_model for the spatial searchlight? :

evds = fit_event_hrf_model(fds, events, …
sl = sphere_searchlight(cv, radius=3, postproc=mean_sample())
res = sl(evds)

(2) If (1) is correct, which of the following should be done when I want to project the searchlight results back onto the fMRI volume:

map2nifti(fds or evds?, 1.0 - sphere_errors).to_filename('sl.nii.gz')

(3) This question is not directly related to searchlight, but it is a similar issue when doing cross-validation on different ROIs. For event-related sensitivity analysis, it is suggested to do the following, as I mentioned before:

sclf = SplitClassifier(LinearCSVMC(), enable_ca=['stats'])
sensana = sclf.get_sensitivity_analyzer()
sens = sensana(segmented_ds) # segmented_ds has samples which consist of multiple volumes

Again, we have the same dataset-segmentation issue here. My question is: can we, in the same way you suggested for the searchlight, use the fits from fit_event_hrf_model and do sens = sensana(evds)? The reason I want to do sensitivity analysis is that, besides looking at accuracies and confusion matrices, I’m interested in projecting the classification weights back onto the fMRI volume and visualizing the patterns.
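One practical detail worth keeping in mind when projecting weights back: a SplitClassifier-based sensitivity analyzer typically yields one weight map per split, so they are usually combined (e.g. averaged) across splits before mapping back to a volume. A toy numpy sketch of that aggregation step with made-up weights (the actual shapes and any per-class-pair rows depend on your classifier setup):

```python
import numpy as np

# Hypothetical per-split linear SVM weight maps:
# one row per cross-validation split, one column per voxel
sens = np.array([[0.2, -0.1, 0.5],
                 [0.4, -0.3, 0.3]])

# Average across splits to get a single map that could then be
# projected back onto the volume (e.g. via map2nifti on the fitted dataset)
mean_sens = sens.mean(axis=0)
print(mean_sens.shape)  # (3,): one averaged weight per voxel
```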