I am planning an analysis and would like to hear from the hivemind whether it is a valid method or too complicated.
Let's say I do a searchlight similarity analysis for item-specific memory reinstatement during task A, based on training data from task B. Now I was thinking: why not do this at each volume of task A, before any first-level analysis, in a sort of sliding-time-window approach? I read somewhere that including the time domain may be beneficial, and it would also give me a time course of decoding, one value per TR.
As far as I can remember, though, the studies I saw only did this with categorical decoding accuracy. The output would be a series of similarity maps, one per volume of the functional run. One could then pass these on to first- and second-level analyses, right? Just like one would with univariate BOLD.
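For what it's worth, here is a minimal sketch of the TR-wise step I have in mind. All names, shapes, and data are made up, and a real searchlight would of course loop this over spheres; I collapse it to one set of voxels just to show the idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical task-A run: TRs x voxels within one searchlight sphere
n_trs, n_voxels = 200, 50
task_a = rng.standard_normal((n_trs, n_voxels))

# hypothetical item template pattern estimated from task B
template_b = rng.standard_normal(n_voxels)

def pearson(a, b):
    """Pearson correlation between two 1-D patterns."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# one similarity value per TR -> a decoding time course for this sphere
time_course = np.array([pearson(task_a[t], template_b) for t in range(n_trs)])
print(time_course.shape)  # one value per volume of the run
```

Done for every sphere, each TR would yield one whole-brain similarity map, which is the series of maps I mean above.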
My analysis, however, should be item-specific. The problem is that this procedure would create a large number of similarity volumes/maps, since I would have to repeat the similarity measure for each item — giving me t volumes by n items similarity maps. How could I pass those on to further analysis?
If I average these maps into one map per TR, I would drown areas of potentially high similarity for item n in the noise from the other items' maps, because the other items at that same time point would yield low or, depending on my task, even negative similarity scores.
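To illustrate the dilution problem with made-up numbers (a toy t-by-n similarity matrix for a single sphere, with one item strongly reinstated at one TR):

```python
import numpy as np

rng = np.random.default_rng(1)

n_trs, n_items = 100, 20
# hypothetical t x n similarity matrix for one searchlight sphere:
# most entries hover around zero
sims = rng.normal(0.0, 0.1, size=(n_trs, n_items))
sims[40, 3] = 0.8  # item 3 strongly reinstated at TR 40

# averaging across items collapses this to one value per TR
mean_map = sims.mean(axis=1)
print(sims[40, 3], round(float(mean_map[40]), 3))
# the item-specific peak (0.8) is diluted by the 19 other near-zero items
```

The averaged value at TR 40 ends up close to zero, so the item-specific effect is essentially invisible after averaging — which is exactly my worry.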
Could I somehow include all t by n similarity maps in a first-level GLM?
Or should I just abandon this whole thing and do my similarity analysis with the betas from an LSS (least-squares separate) first-level GLM, like a normal person?
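For comparison, this is the LSS setup I mean at the end: a separate GLM per trial, with one regressor for the trial of interest and one for all remaining trials. A toy sketch with made-up onsets, boxcars instead of HRF-convolved regressors:

```python
import numpy as np

n_trs = 120
onsets = [10, 28, 46, 64, 82, 100]  # hypothetical trial onsets, in TRs
dur = 4                              # boxcar duration, in TRs

def lss_design(trial_idx):
    """Design matrix for one LSS model: [this trial, all other trials, intercept]."""
    x = np.zeros((n_trs, 3))
    for i, onset in enumerate(onsets):
        col = 0 if i == trial_idx else 1
        x[onset:onset + dur, col] = 1.0  # simple boxcar, no HRF for brevity
    x[:, 2] = 1.0  # intercept
    return x

# one design matrix (and one fitted GLM) per trial; the per-trial betas
# from column 0 are what would feed the similarity analysis
X = lss_design(0)
print(X.shape)
```

One model per trial then yields one beta map per item presentation, which can go into the similarity analysis without any of the t-by-n bookkeeping above.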