I am a beginner user of TDT and I would like to ask a few basic questions. What kind of output should I expect? Say I set 'searchlight' as the analysis type and the output measure to 'accuracy_minus_chance'. I understand that the output reflects the performance of the classifier as an accuracy percentage, but the resulting output is a double (a column of values, one per voxel). How can I interpret that result? What would you recommend as a next step? Also, is there a way to plot or generate figures from the results? I would greatly appreciate any help with these queries.

Each value is the mean accuracy minus chance, averaged across cross-validation folds, for the searchlight centered on that voxel.
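To make that concrete, here is a minimal plain-Python sketch of what such a value represents for a single searchlight. The fold accuracies and chance level below are made up for illustration; this is not TDT code (TDT itself is MATLAB-based):

```python
# Hypothetical fold-wise decoding accuracies (proportion correct) for ONE
# searchlight, one per cross-validation fold. Numbers are invented.
fold_accuracies = [0.55, 0.60, 0.50, 0.65]
chance_level = 0.5  # two balanced classes

# 'accuracy_minus_chance' corresponds to the fold-averaged accuracy minus
# chance, expressed in percent. A searchlight map holds one such value per voxel.
acc_minus_chance = 100 * (sum(fold_accuracies) / len(fold_accuracies) - chance_level)
print(acc_minus_chance)  # 7.5 for these made-up folds
```

So a value of 7.5 at a voxel would mean the classifier was, on average across folds, 7.5 percentage points above chance in the searchlight around that voxel.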

This tells you something about the amount of discriminative information about your conditions of interest in that searchlight. Note that accuracy is not a standardized effect size measure (even though many people want to treat it that way), so the absolute numbers are not necessarily that meaningful; what matters more is the range of values you get (i.e. the variance). If you run this analysis in each participant, you could, for example, spatially normalize your results, smooth them slightly, and then run a classical t-test at the group level[1]. If you have only this one participant and want to run a statistical test, you could use decoding_statistics.m. However, permutation stats on searchlight results only work for a sufficiently large number of runs if you have only one regressor per condition per run. For trialwise analyses in event-related designs, permutation tests are not quite valid (although they are quite common).
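The group-level suggestion above amounts to a one-sample t-test against zero at each voxel of the (normalized, smoothed) accuracy-minus-chance maps. Here is a single-voxel illustration in plain Python; the subject values are invented, and in practice you would run this in SPM or another stats package rather than by hand:

```python
import math

# Hypothetical accuracy-minus-chance values (percent) at ONE voxel,
# one per subject, after normalization and smoothing. Numbers are invented.
subject_values = [5.0, 2.0, 8.0, -1.0, 4.0, 6.0]

n = len(subject_values)
mean = sum(subject_values) / n
# Sample variance (n - 1 in the denominator)
var = sum((v - mean) ** 2 for v in subject_values) / (n - 1)
# One-sample t-statistic against 0, with n - 1 degrees of freedom
t = mean / math.sqrt(var / n)
print(t)
```

Repeating this at every voxel (and correcting for multiple comparisons) gives the usual group-level searchlight statistic, with the caveat in [1] that this does not test exactly the same hypothesis as a classical random effects test.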

Well, when making TDT we didn’t think it made sense to reinvent the wheel. You should have nii files of the searchlight results, and you can use your software of choice to look at them (e.g. AFNI, SPM, or MRIcron). For vectors, you can export them to any other software package and plot them however you prefer. We have some simple scripts, but I don’t think you need them; just use the software you have used so far.

Best,
Martin

[1] (bearing in mind this is not testing the exact same hypothesis as a classical random effects test, see this paper by Allefeld et al., preprint here)