# My searchlight RSA runs suspiciously fast...faulty code or tiny data? [python]

My question: Is my code doing what I think it’s doing, and is it, therefore, normal to analyze fMRI data like mine this quickly?

My impression, from the literature, forums, etc., is that a searchlight RSA should take a significant amount of time to run. However, I’m running a searchlight RSA for one subject in ~5 minutes. Part of me suspects it’s just the nature of my data, or that the toolbox I’m using [Brainiak] is simply well optimized for searchlight analyses, but the other part of me can’t help but wonder whether I’ve done something incorrectly, since I’ve literally never run this type of analysis before. Below are details about my code (in case I’m to blame), followed by info about my data (in case it’s “to blame”). Any feedback would be welcome! Even if you can’t speak to my situation and can only share your own experience with this type of analysis, that would be helpful.

The code:
I’m using Brainiak to run the searchlight, and NumPy/SciPy for the RSA. In short, Brainiak has you define a function that it runs on each searchlight cluster; if I’m to blame, I believe this is where the error would be. Here are some snippets, each with a comment or two describing what I believe it does. This is where I would really appreciate some feedback since, as I mentioned, I’ve never done this before! My plan was to follow the methods of the original Kriegeskorte et al. (2008) paper.

`searchlight_cluster_bold_data` is a “volume x voxel” 2D array:
`similarity_matrix = numpy.corrcoef(searchlight_cluster_bold_data)`
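For what it’s worth, `numpy.corrcoef` treats each row as a variable, so a “volume x voxel” array does give a volume-by-volume similarity matrix, which is what RSA needs here. A minimal sanity-check sketch (the dimensions and random data are made up, not from my study):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical searchlight: 20 volumes (conditions) x 33 voxels
searchlight_cluster_bold_data = rng.standard_normal((20, 33))

# corrcoef correlates ROWS by default, so this is a 20 x 20
# condition-by-condition representational similarity matrix (RSM)
similarity_matrix = np.corrcoef(searchlight_cluster_bold_data)
print(similarity_matrix.shape)  # (20, 20)
```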

Subtracting the similarity matrix from 1 turns the RSM into an RDM:
`dissimilarity_matrix = 1 - similarity_matrix`

A Spearman rank-order correlation compares the neural RDM to a pre-defined model matrix:
`RSA, _ = scipy.stats.spearmanr(dissimilarity_matrix, model_matrix)`

Then the correlation is subtracted from 1 to give a dissimilarity value:
`RSA = 1 - RSA`
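One thing worth double-checking at this step: when `scipy.stats.spearmanr` is given two 2D arrays, it stacks them and correlates their *columns*, returning a matrix of correlations rather than a single rho. The conventional RSA comparison instead correlates the vectorized off-diagonal entries of the two RDMs. A sketch of that version (all data here is made up; `model_matrix` is just a toy symmetric stand-in):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
bold = rng.standard_normal((20, 33))          # toy searchlight data
dissimilarity_matrix = 1 - np.corrcoef(bold)  # neural RDM (20 x 20)

model = rng.random((20, 20))
model_matrix = (model + model.T) / 2          # toy symmetric model RDM

# Vectorize the off-diagonal upper-triangle entries before correlating.
# Passing the full 20 x 20 arrays to spearmanr would instead correlate
# columns and return a 40 x 40 correlation matrix, not one rho.
iu = np.triu_indices(20, k=1)
rho, pval = stats.spearmanr(dissimilarity_matrix[iu], model_matrix[iu])
RSA = 1 - rho
```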

There are some other steps, but I doubt they’re specifically to blame or add much to the runtime. These involve how I handle voxels outside the mask, some rounding, and the removal of the diagonals for the Spearman correlation.
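Putting the steps above together, here is roughly the shape of a per-searchlight function in the four-argument form Brainiak’s `Searchlight` expects (data list, searchlight mask, radius, broadcast variable). This is an illustrative sketch with toy inputs standing in for Brainiak’s actual calls, not my real code:

```python
import numpy as np
from scipy import stats

def sl_kernel(subject_data, sl_mask, myrad, model_matrix):
    """Illustrative searchlight kernel: one searchlight's data in,
    one dissimilarity value (1 - Spearman rho) out."""
    # subject_data: list with one 4D array (x, y, z, volumes);
    # keep only the voxels inside the searchlight mask
    bold = subject_data[0][sl_mask.astype(bool)]    # voxels x volumes
    bold = bold.T                                   # volumes x voxels
    neural_rdm = 1 - np.corrcoef(bold)              # condition RDM
    iu = np.triu_indices(neural_rdm.shape[0], k=1)  # drop the diagonal
    rho, _ = stats.spearmanr(neural_rdm[iu], model_matrix[iu])
    return 1 - rho

# Toy call: a 3x3x3 searchlight with 20 volumes
rng = np.random.default_rng(2)
data = [rng.standard_normal((3, 3, 3, 20))]
mask = np.ones((3, 3, 3), dtype=int)
model = rng.random((20, 20))
model = (model + model.T) / 2
value = sl_kernel(data, mask, 1, model)
```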

The data:
It’s totally possible that my dataset is just small! For example, even though my whole-brain mask has ~90k voxels, I’m only processing 20 “volumes” per person, instead of the hundreds per session that are common in fMRI studies.

This is because I’m running the RSA on averaged t-maps, based on single-trial GLMs, for each of my study conditions (i.e., all single-trial GLM beta maps of a given trial type are averaged together across runs). Twenty conditions = twenty t-maps. Each searchlight cluster therefore only works with a “20 x voxels” array (the searchlight is diamond shaped and 5 voxels across at its widest), which reduces down to just a 20x20 RDM.
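For a rough frame of reference, the per-searchlight work here is tiny: a `corrcoef` on a ~20 x ~100 array plus a `spearmanr` on two length-190 vectors, which typically takes well under a millisecond. A back-of-envelope timing sketch (the ~81 voxels per searchlight and single-threaded loop are assumptions, not my actual setup):

```python
import time
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
data = rng.standard_normal((20, 81))  # assumed ~81 voxels per searchlight
model = rng.random((20, 20))
model = (model + model.T) / 2
iu = np.triu_indices(20, k=1)

# Time one searchlight's worth of work, averaged over many repeats
start = time.perf_counter()
n_iter = 200
for _ in range(n_iter):
    rdm = 1 - np.corrcoef(data)
    rho, _ = stats.spearmanr(rdm[iu], model[iu])
per_searchlight = (time.perf_counter() - start) / n_iter

# Very rough whole-brain estimate for ~90k searchlight centers
est_total_minutes = per_searchlight * 90_000 / 60
```

On typical hardware `est_total_minutes` comes out in the low single digits even before Brainiak’s parallelization, so a ~5 minute runtime for 20 volumes is not obviously suspicious on its own.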

Again, it could be the “fault” of my data that the analysis is so fast - I just don’t have a frame of reference for this!

If you’ve made it this far (or, ideally, skimmed around a bit), thank you for your time! It’s communities like these that help ECRs like myself, and others, to thrive, and I can’t thank you enough for it, truly.
