Looking for a Searchlight function with customizable options

Hi all,

I want to apply the searchlight method to my fMRI data. In particular, I’d like to search the whole brain for the volume of voxels in which the activity pattern is most dissimilar between two conditions. That is, what I really want to do is run a searchlight over given brain volume data (in my case, a t-value for the activity in each voxel) and, within each iteration of the sphere, assess the similarity of the activity patterns between the two conditions.

I have looked at nilearn’s SearchLight because I use nilearn heavily in my other analyses. However, it seems that the actual “searchlighting” cannot be hijacked to run a custom function instead of a decoding object (i.e., it appears you can only do searchlight analyses in conjunction with decoding). Is that correct, or is there a workaround?

If nilearn is not an option, is there another (Python) package with a function that searches 3D/4D brain volumes but lets me determine myself what happens whenever a sphere volume has been selected?

Thanks in advance for any help!

Cheers,
Michl

Hi Michl,
I’m not sure what you want to do: if you want to assess how dissimilar the pattern of activity is between two conditions, taking the classification accuracy of a classifier that tries to discriminate between the two conditions is a reasonable, albeit indirect, answer.
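For reference, the standard usage looks like this (a minimal sketch; `fmri_img`, `mask_img`, and the label vector `y` are placeholders for your own data):

```python
from nilearn.decoding import SearchLight
from sklearn.model_selection import KFold

# Placeholder inputs: fmri_img (4D Niimg), mask_img (binary Niimg),
# y (one condition label per volume).
searchlight = SearchLight(
    mask_img,
    radius=5.6,             # sphere radius in mm
    estimator="svc",        # linear SVC
    cv=KFold(n_splits=4),
    n_jobs=-1,
)
searchlight.fit(fmri_img, y)
# searchlight.scores_: 3D array of cross-validated accuracies
```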
What other metric would you like to use?
Best,
Bertrand


Hi Bertrand,

thanks for the reply! Great work with nilearn, by the way; it has been a real game changer for me. I am loving it!!

I think my previous post was misleading. I probably need to go into more detail.

The situation is as follows:
I have deconvolved brain activity (i.e., a t-value for each voxel in my ROI; specifically, I am using nideconv for the deconvolution: https://nideconv.readthedocs.io/en/latest/) and want to search this entire volume using the searchlight approach.

Now, I actually want to do two things, and as I understand it, I need access to the input of each iteration of the volume sphere, or at least the option to run my own custom scripts/methods on each iteration:

  1. I want to test whether the pattern information in one condition is represented similarly to the pattern information in another condition. A more specific example: say I show participants objects from three different object categories (e.g. houses, faces, animals) in two different conditions (e.g. focused attention and divided attention). Now, I want to train a classifier to discriminate between the object categories (houses, faces, animals) in one condition (i.e. focused attention) and test the classifier’s performance in the other condition (i.e. divided attention); see the first sketch after this list.

  2. Alternatively, or in addition to 1), I want to create representational dissimilarity matrices (see Kriegeskorte et al.) comprising the pattern information for each combination of object category and attention condition, based on the voxel activity pattern in that sphere; see the second sketch below.
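
For 1), this is roughly what I have in mind. As far as I can tell, sklearn-style `cv` arguments also accept an explicit list of (train, test) index pairs, so a single train-on-focused / test-on-divided “fold” might already work with nilearn’s SearchLight (untested sketch; `imgs`, `mask_img`, `categories`, and `attention` are placeholders for my own data):

```python
import numpy as np
from nilearn.decoding import SearchLight

# Placeholder inputs: imgs (4D Niimg, one volume per trial),
# categories (object-category label per volume),
# attention (attention condition per volume).
train_idx = np.where(attention == "focused")[0]
test_idx = np.where(attention == "divided")[0]

sl = SearchLight(
    mask_img,
    radius=6.0,                   # sphere radius in mm, example value
    estimator="svc",
    cv=[(train_idx, test_idx)],   # one cross-condition "fold"
    n_jobs=-1,
)
sl.fit(imgs, categories)
# sl.scores_: 3D map of train-on-focused / test-on-divided accuracy
```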
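
And for 2), a minimal sketch of what I mean by per-sphere RDMs, written in plain NumPy/SciPy (assuming `data` is an (n_conditions, x, y, z) array stacking my t-maps, one per category-by-attention combination, and `mask` is a boolean brain mask; the radius is in voxels):

```python
import numpy as np
from scipy.spatial.distance import pdist

def sphere_rdms(data, mask, radius=3):
    """Correlation-distance RDM for every searchlight sphere.

    data   : (n_conditions, x, y, z) t-maps
    mask   : (x, y, z) boolean brain mask
    radius : sphere radius in voxels
    """
    coords = np.array(np.nonzero(mask)).T      # (n_vox, 3) in-mask coordinates
    flat = data[:, mask]                       # (n_cond, n_vox) patterns
    rdms = []
    for center in coords:
        in_sphere = np.linalg.norm(coords - center, axis=1) <= radius
        patterns = flat[:, in_sphere]          # (n_cond, sphere_size)
        rdms.append(pdist(patterns, metric="correlation"))
    return coords, np.vstack(rdms)             # one vectorized RDM per voxel
```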

From my understanding, neither approach is possible in nilearn without somehow “hacking” into its functions (i.e. the SearchLight class), or is that incorrect? I reckoned the actual hacking would not be extremely straightforward, so I was looking for searchlight algorithms that allow easier access, but maybe that is not necessary.

Best,
Michl

In BrainIAK (https://brainiak.org/) you can write your own kernel function to be run at each voxel.
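
Going by the BrainIAK tutorials, the pattern is roughly the following (a sketch from memory, not tested; `bold_vol` is a 4D NumPy array, `mask` a 3D binary mask, and `labels` holds one condition label per volume):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from brainiak.searchlight.searchlight import Searchlight

def kernel(data, sl_mask, myrad, bcvar):
    # data[0]: 4D block around the current center; sl_mask: sphere mask
    labels = bcvar
    X = data[0][sl_mask == 1].T        # (n_volumes, n_sphere_voxels)
    # Anything can go here: decoding, an RDM, any custom metric ...
    return cross_val_score(SVC(kernel="linear"), X, labels, cv=3).mean()

sl = Searchlight(sl_rad=3)             # sphere radius in voxels
sl.distribute([bold_vol], mask)        # data and mask as NumPy arrays
sl.broadcast(labels)                   # variables made available to the kernel
result = sl.run_searchlight(kernel)    # 3D array of kernel return values
```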