I had a question about TDT. I used this toolbox to run an RSA searchlight analysis between neural and behavioral RDMs, but I have a question regarding statistical methods. Please excuse my lack of statistical knowledge…
I was wondering what statistical method would be valid at the single subject level? I would like to have a map output that only contains significant correlations.
That is tricky. You would need to run a randomization test (also known as a Mantel test): take the behavioral similarity matrix and permute its labels, e.g.
n_obj = 40; % assuming 40 objects
randvec = randperm(n_obj);
simmat_perm = simmat(randvec,randvec);
Then you would re-run the analysis, and repeat this at least 1,000 times, or as often as is feasible. The p-value is then 1 minus the percentile at which your observed correlation lies in the null distribution: if it lies at the 99th percentile (i.e. 0.99), the p-value is 0.01. But that’s not corrected for multiple comparisons! You can download the Matlab function fdr_bh and use it with the dependency option to get FDR-corrected results. Alternatively, you can run cluster-based correction, but that’s more involved and not implemented in TDT.
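A minimal sketch of that permutation loop, pulling the pieces above together (here `run_rsa_searchlight` is a placeholder for however you call your searchlight analysis, and `simmat` is your behavioral similarity matrix):

```matlab
n_obj  = 40;      % assuming 40 objects/conditions
n_perm = 1000;    % at least 1,000 permutations, more if feasible

obs_r  = run_rsa_searchlight(simmat);        % observed correlation per searchlight (row vector)
null_r = zeros(n_perm, numel(obs_r));        % null distribution, one row per permutation

for i = 1:n_perm
    randvec = randperm(n_obj);               % permute condition labels
    simmat_perm = simmat(randvec, randvec);  % rows and columns together
    null_r(i, :) = run_rsa_searchlight(simmat_perm);
end

% uncorrected p per searchlight: fraction of null correlations >= observed
p = mean(null_r >= obs_r, 1);

% FDR correction with fdr_bh (MATLAB File Exchange), 'dep' = dependency option
[h, crit_p] = fdr_bh(p, 0.05, 'dep');
```

`h` then marks the searchlights surviving FDR correction, which you can use to mask your correlation map.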
Since this analysis is very time consuming (probably taking 2-7 days), you could also approximate the null distribution by using the values across searchlights. You just need to be explicit about this in your publication, and the results are likely going to be similar. Essentially, you could use the subset option in TDT, run only a random subset of searchlights in each iteration, and then pool the results across all voxels to form a null distribution. I would guess that with a subset of 1,000 searchlights you could run e.g. 5,000 permutations quite fast. In the end, you would combine these results, in this case 5 million searchlight values, determine the percentile of your observed results within this pooled null, and again apply e.g. FDR correction.
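The pooling step could look roughly like this (a sketch only; `null_r` would hold the permutation results from the random searchlight subsets, and `obs_r` the observed correlations from the full searchlight run):

```matlab
% null_r: n_perm x n_subset matrix of permutation correlations from the
% random subsets (e.g. 5000 x 1000), pooled into a single null distribution
null_pool = null_r(:);                          % 5 million values

% uncorrected p for each observed searchlight correlation, taken as
% the fraction of pooled null values that are at least as large
p = arrayfun(@(r) mean(null_pool >= r), obs_r);

% FDR correction as before
[h, crit_p] = fdr_bh(p, 0.05, 'dep');
```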
I know this sounds a little involved, sorry about that!
Hope this helps.
Thanks so much for your response! We ended up going with your first suggestion and have successfully been able to get the p-values in vector form. I have a follow-up question:
When running the searchlight normally, we get a NIfTI file output, a brain map of correlation values that I can overlay onto a T1 in MRIcroGL. Is there a function in TDT that will allow me to take these p-values and write them out as a brain map (NIfTI file)? It isn’t automatically saved as an output like the correlation map normally is.
Thank you for your help!
Great to hear about your progress!
This is a recurring topic: how to convert a vector of numbers back into a volume and write it as an image. I thought I had at some point written a function that does this for you. There is an easy fix, but it’s not quite general.
I just wrote a script that makes use of decoding_write_results. It should work with SPM and AFNI and runs more or less automatically. Rather than posting it here (where it could be wrong or become outdated), I would like you to try it first. We could then make it part of the utilities in TDT if it works for your purposes, too.
What it does is take your vector of p-values, convert them to z-values, and write both the p-values and z-values out as images. To know your critical z-value for thresholding, convert your critical p-value to a z-value using
norminv(1-p) or, if you don’t have the Statistics Toolbox, use
sqrt(2) * erfcinv(2*p)
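As a quick sanity check, both expressions give the same critical z for a given p (e.g. for p = 0.05, z ≈ 1.6449):

```matlab
p = 0.05;
z1 = norminv(1 - p);          % with the Statistics Toolbox
z2 = sqrt(2) * erfcinv(2*p);  % without it; identical result (~1.6449)
% threshold the z-map at this value to keep only significant voxels
```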
Please shoot me an email and I’ll send the script over to you.
This would be great! I really appreciate you sharing this with our lab! I have just sent you an email.