I’m using the TDT toolbox to run a searchlight, but I’d like to standardize (z-score) the images before feeding them to the searchlight. Is there any way to run a searchlight on standardized data using TDT? Or is there some other way to standardize the data?
Please have a look at the help file for decoding_scale_data. This normalizes across conditions, which is what most people do. Normalizing across voxels is currently not implemented (and is also rather unusual). You have three options: normalize using all the data (assuming there is no way this could introduce a bias), normalize the training data and test data separately, or estimate the normalization parameters on the training data and apply them to the test data.
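As a generic illustration (not TDT code), the third option — estimating z-scoring parameters on the training data and applying them to the test data — can be sketched as follows; the function name `zscore_train_test` and the toy data are my own, purely for demonstration:

```python
import numpy as np

def zscore_train_test(train, test, eps=1e-12):
    """Z-score each voxel (column) using the mean and std estimated on
    the training data only, then apply those same parameters to the
    test data, so no information leaks from the test set."""
    mu = train.mean(axis=0)
    sd = train.std(axis=0)
    sd = np.where(sd < eps, 1.0, sd)  # avoid division by zero for constant voxels
    return (train - mu) / sd, (test - mu) / sd

# toy data: rows are samples (e.g. beta images), columns are voxels
rng = np.random.default_rng(0)
train = rng.normal(5.0, 2.0, size=(20, 50))
test = rng.normal(5.0, 2.0, size=(10, 50))
train_z, test_z = zscore_train_test(train, test)
```

Normalizing the training and test sets separately would instead call the same z-scoring on each set with its own parameters; the cross-validated variant above is the safest default when in doubt about bias.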
If you would like to scale your data for speed, then this is the way to go. If you would like to scale for performance, an alternative is to scale the data using the residuals of the first-level model. This can be variance scaling or covariance scaling. Variance scaling is very similar to using t-volumes for decoding (which is pretty fast, too), while covariance scaling additionally takes the covariance between voxels into account. For that purpose, I would check the template for crossnobis distance estimation, which contains the details of this scaling procedure.
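To make the variance-scaling idea concrete, here is a minimal sketch (again not TDT code; the function name `variance_scale` and the toy data are hypothetical): each voxel's pattern values are divided by the standard deviation of that voxel's first-level residuals, which is why the result behaves much like a t-volume:

```python
import numpy as np

def variance_scale(patterns, residuals, eps=1e-12):
    """Univariate noise normalization: divide each voxel (column) by the
    standard deviation of the first-level residuals at that voxel.
    Covariance scaling would use the full residual covariance instead."""
    noise_sd = residuals.std(axis=0)
    noise_sd = np.where(noise_sd < eps, 1.0, noise_sd)  # guard against zero variance
    return patterns / noise_sd

rng = np.random.default_rng(1)
betas = rng.normal(size=(8, 100))               # conditions x voxels
resid = rng.normal(0.0, 3.0, size=(200, 100))   # time points x voxels (residuals)
scaled = variance_scale(betas, resid)
```

Covariance scaling replaces the per-voxel standard deviations with a (regularized) whitening by the residual covariance matrix, which is what the crossnobis procedure uses.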