It’s the mean accuracy across cross-validation folds in the searchlight centered around that voxel.
This tells you something about the amount of discriminative information about your conditions of interest within that searchlight. Note that accuracy is not a standardized effect-size measure (even though many want to treat it that way), so the absolute numbers are not necessarily very meaningful; the range of values you get (i.e. the variance) matters more. If you run this analysis in each participant, you could, for example, spatially normalize your results, smooth them slightly, and then run a classical t-test at the group level. If you have only this one participant and want to run a statistical test, then you could use decoding_statistics.m. However, permutation statistics on searchlight results only work for a sufficiently large number of runs if you have only one regressor per condition per run. For trial-wise analyses in event-related designs, permutation tests are actually not strictly valid (although they are quite common).
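To see why few runs are a problem for permutation statistics, consider a common exchangeability scheme for two conditions with one beta image per condition per run: the two labels can be swapped within each run independently, giving 2^n_runs relabelings, so the smallest achievable p-value is 1/2^n_runs. A minimal sketch (the counting scheme is our illustration, not something TDT prescribes):

```python
# Hypothetical illustration: with run-wise label swaps for two
# conditions, there are 2**n_runs relabelings, so the smallest
# p-value a permutation test can ever produce is 1 / 2**n_runs.

def min_permutation_p(n_runs: int) -> float:
    """Smallest p-value reachable with within-run label swaps."""
    return 1.0 / (2 ** n_runs)

for n_runs in (4, 5, 8, 10):
    print(n_runs, min_permutation_p(n_runs))
# With 4 runs the best possible p is 0.0625, which can never pass
# p < 0.05; with 5 runs it is 0.03125, leaving almost no resolution.
```

With many runs (or with trial-wise labels, where the count grows much faster), the permutation distribution becomes fine-grained enough for meaningful inference.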
Well, when making TDT we didn’t think it made sense to reinvent the wheel. You should already have nii-files of the searchlight results, and you can inspect them with your software of choice (e.g. AFNI, SPM, or MRIcron). For vectors, you can export them to any other software package and plot them however you prefer. We have some simple scripts, but you probably won’t need them; just use the software you have been using so far.
(Bear in mind that a group-level t-test on accuracy maps does not test exactly the same hypothesis as a classical random-effects test; see the paper by Allefeld et al.)
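As a hedged sketch of the group-level route described earlier (normalize, smooth, then a classical t-test), a voxelwise one-sample t-test of accuracy maps against chance might look like the following. All names and data here are made up for illustration; real maps would be loaded from the nii-files, and this is not part of TDT itself:

```python
import numpy as np
from scipy import stats

# Hypothetical stack of spatially normalized, smoothed searchlight
# accuracy maps: one row per participant, one column per voxel.
rng = np.random.default_rng(0)
n_subjects, n_voxels = 16, 5000
chance = 0.5  # two-class decoding

acc = chance + 0.02 * rng.standard_normal((n_subjects, n_voxels))
acc[:, :100] += 0.05  # pretend the first 100 voxels carry information

# One-sample t-test per voxel: is mean accuracy above chance?
t, p_two_sided = stats.ttest_1samp(acc, popmean=chance, axis=0)
p_one_sided = np.where(t > 0, p_two_sided / 2, 1 - p_two_sided / 2)

# Count of voxels below threshold, before any correction for
# multiple comparisons (which a real analysis would need).
print((p_one_sided < 0.001).sum())
```

In practice you would keep the maps in image form and let your group-level software (e.g. SPM) handle the test and the multiple-comparisons correction.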