Best practice searchlight analysis

Dear all,

I created a script to perform a searchlight analysis on my fMRI dataset, but I am having difficulty picking the most relevant analyses.
I read the Etzel et al. (2013) paper, which argued for: (a) running the searchlight analysis while varying the searchlight size, the kernel, and the test statistic; (b) running an ROI analysis (preferably on a new dataset) in the region identified by the searchlight, to check whether all voxels are relevant; (c) doing a ‘lesioning study’, i.e. looking at the decoding accuracy when the region identified by the searchlight is removed. All these options seem interesting to me, but I think I will run out of time if I apply all of them… Which one do you think I should best focus on?
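For point (a), one way to set this up is to make the searchlight radius a parameter, so the same analysis can be rerun at several sizes. A minimal sketch with scikit-learn and toy data (the coordinates, `X`, and `y` below are placeholders standing in for your voxel grid, trial patterns, and condition labels):

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
n_trials = 20
# Toy 4x4x4 voxel grid; in a real analysis these would be voxel coordinates
coords = np.array([(i, j, k) for i in range(4) for j in range(4) for k in range(4)])
X = rng.randn(n_trials, len(coords))   # trials x voxels
y = np.repeat([0, 1], n_trials // 2)   # two condition labels

def searchlight(radius):
    """Cross-validated accuracy for a sphere centered on each voxel."""
    acc = np.empty(len(coords))
    for v, center in enumerate(coords):
        # sphere: all voxels within `radius` (in voxel units) of the center
        sphere = np.linalg.norm(coords - center, axis=1) <= radius
        acc[v] = cross_val_score(LinearSVC(dual=False), X[:, sphere], y, cv=5).mean()
    return acc

# Rerun at two searchlight sizes and compare the resulting maps
acc_small = searchlight(1.0)
acc_large = searchlight(2.0)
```

Varying the kernel or test statistic would just mean swapping the estimator or the scoring argument in `cross_val_score`.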
I also read Stelzer et al. (2013), who argued for running permutation tests on each subject and then applying a bootstrap procedure for group-level inference. The thing is that I currently plan to perform leave-one-subject-out cross-validation on my dataset, given that there are too many labels and too few data points per subject to run a searchlight within each subject (for instance, some subjects have only 10 occurrences of a specific label in one run, so a leave-one-run-out CV would have been difficult). I would like to do a permutation analysis, but I do not know how I should proceed. Would it be better to change my plans and apply a subject-level searchlight, or can I do something similar at the group level?
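For the leave-one-subject-out scheme, scikit-learn's `LeaveOneGroupOut` combined with `permutation_test_score` gives one possible starting point for a permutation analysis (when `groups` is passed, the labels are permuted within each subject). A minimal sketch with synthetic data; `X`, `y`, and `subjects` are hypothetical stand-ins for your patterns, condition labels, and subject IDs:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, permutation_test_score

rng = np.random.RandomState(0)
n_subjects, n_trials, n_voxels = 8, 20, 50
X = rng.randn(n_subjects * n_trials, n_voxels)             # trials x voxels
y = np.tile(np.repeat([0, 1], n_trials // 2), n_subjects)  # condition labels
subjects = np.repeat(np.arange(n_subjects), n_trials)      # grouping variable

logo = LeaveOneGroupOut()  # each fold holds out one subject entirely
score, perm_scores, p_value = permutation_test_score(
    LinearSVC(dual=False), X, y,
    groups=subjects, cv=logo,
    n_permutations=100, random_state=0,
)
print(f"accuracy={score:.2f}, p={p_value:.3f}")
```

Note this is not the full Stelzer et al. (2013) bootstrap procedure, just a single group-level permutation test against the observed cross-validated accuracy.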

Your help is greatly appreciated :slight_smile: !

So much depends on what exactly the purpose of the analysis is … the advice in the 2013 paper is intended to reduce the chance of running into some especially awful “surprises” that can come from group searchlight analysis (e.g., that the area appearing most informative is actually not informative when tested as an ROI).

Could analysis in ROIs be possible in your case? Even if you want to cover the entire cortex, running the analysis in each of 400 parcels (the Schaefer2018 400-parcel atlas is my current default) is far more tractable than a voxel-wise searchlight.
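The parcel-based version amounts to running the same cross-validated classifier once per parcel instead of once per searchlight sphere. A minimal sketch; `parcel_of_voxel` is a hypothetical array assigning each voxel to a parcel (in practice it would come from the Schaefer2018 atlas, e.g. via nilearn), and here it is random:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.RandomState(0)
n_trials, n_voxels, n_parcels = 40, 200, 4   # tiny toy sizes
X = rng.randn(n_trials, n_voxels)            # trials x voxels
y = np.repeat([0, 1], n_trials // 2)         # condition labels
parcel_of_voxel = rng.randint(0, n_parcels, size=n_voxels)  # placeholder atlas

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
parcel_acc = {}
for p in range(n_parcels):
    Xp = X[:, parcel_of_voxel == p]          # voxels in this parcel only
    parcel_acc[p] = cross_val_score(LinearSVC(dual=False), Xp, y, cv=cv).mean()
```

With 400 parcels this is one classifier fit per parcel and fold, rather than one per voxel, which also makes the multiple-comparisons problem far smaller.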