In a decoding analysis I normally have n voxels (an ROI with n voxels) from a single beta image (A) for each of k data points. In my current scenario, however, I'd like to extend the feature space to two beta maps (A and B), giving 2*n features per data point. A and B model different components of each trial, which is why I cannot combine them into a single regressor; for the classification, however, I want to use the information from A and B together. How can I assign two beta maps to each data point in TDT?

My current workaround is a bit wonky: I simply created a "beta_combined.nii" that stacks [A; B] into one matrix, and then did the same with my masks, appending each mask adjacent to itself. It seems to work, but is there a more elegant way? Also, this approach would be incompatible with a searchlight analysis.
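For reference, here is a minimal sketch of that concatenation workaround using SPM's image I/O (spm_vol, spm_read_vols, spm_write_vol); the filenames are placeholders, and you would repeat the same steps for the mask image:

```matlab
% Hedged sketch: stack beta maps A and B along the first dimension
% into a single combined image. Filenames are illustrative.
VA = spm_vol('beta_A.nii');   A = spm_read_vols(VA);
VB = spm_vol('beta_B.nii');   B = spm_read_vols(VB);
C  = cat(1, A, B);            % [A; B]: concatenate along dim 1

VC = VA;                      % reuse A's header as a template
VC.fname  = 'beta_combined.nii';
VC.dim(1) = 2 * VA.dim(1);    % first dimension is now doubled
spm_write_vol(VC, C);

% Repeat with the mask so the doubled mask selects both halves.
```

Note that the combined image no longer has a meaningful voxel-to-world mapping for the second half, which is fine for ROI decoding but is exactly why a plain searchlight breaks.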

Interesting, this is a scenario we hadn't considered. I actually like your approach. I had been planning to write code for custom searchlight shapes, which would then allow this to work directly. Since I haven't implemented that yet, I think you would still need to hack the function that localizes the mask for the current searchlight. This code assumes you concatenated the brains along the first dimension.
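The idea of such a hack can be sketched as follows (a sketch only, not TDT's actual internals; variable names like sl_ind and sx are illustrative, and it assumes the two volumes were concatenated along the first dimension, so voxel (x,y,z) of B sits at (x+sx,y,z) in the combined volume):

```matlab
% Hedged sketch: given the linear indices sl_ind of the current
% searchlight (all in A's half of the combined volume), add the
% matching voxels from B's half.
dims = [2*sx, sy, sz];                   % size of the combined volume
[x, y, z]  = ind2sub(dims, sl_ind);      % voxel coordinates in A's half
sl_ind_B   = sub2ind(dims, x + sx, y, z);% same voxels, shifted into B's half
sl_ind_all = [sl_ind; sl_ind_B];         % searchlight now spans A and B
```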

This will expand each searchlight into the second half of the volume, i.e. into your second beta map. Since this only makes sense for searchlight centers in the first half, you also need to limit the number of searchlights you run:

cfg.searchlight.subset = (1:n_voxel)'; where n_voxel is the number of voxels (i.e. searchlight centers) in your original, non-doubled mask.

Finally, at least initially I would turn the searchlight visualization on, to check whether it worked or whether the volumes need to be concatenated differently.