I know this is resurrecting an old thread, but I am working with some very large 7T data (~400k voxels in the mask) and have written a couple of implementations that I thought might be helpful to others. Obviously they are written for me rather than for general use, and as such contain some absolute paths, but these should be easy to adapt.
The first is a simple wrapper that uses a Matlab parfor to parallelise by voxel, as Martin suggests, then recombines and saves the results at the end. Write your configuration script in the normal way, making sure to load your residuals into a misc structure, but then where you would normally call:
results = decoding(cfg,[],misc);
instead call
pool = gcp('nocreate'); % returns empty if no parallel pool is open
if isempty(pool)
    pool = parpool; % open a pool with the default profile
end
num_workers = pool.NumWorkers;
num_searchlights = size(misc.residuals,2);
searchlights_per_worker = ceil(num_searchlights/num_workers); % Divide the task up into the number of workers
parfor crun = 1:num_workers
results{crun} = decoding_parallel_wrapper(cfg,misc,searchlights_per_worker,crun);
end
all_results = results{1};
for crun = 2:num_workers
all_results.decoding_subindex = [all_results.decoding_subindex; results{crun}.decoding_subindex];
all_results.other_average.output(results{crun}.decoding_subindex) = results{crun}.other_average.output(results{crun}.decoding_subindex);
end
results = all_results;
disp('Crossnobis on the whole brain complete, saving results, note this could take some time')
save(fullfile(cfg.results.dir,'res_other_average.mat'),'results','-v7.3')
assert(sum(cellfun(@isempty,all_results.other_average.output))==0,'Results output not completely filled despite completion of the parallel loop - please check')
delete(fullfile(cfg.results.dir,'parallel_loop*.mat'))
The decoding_parallel_wrapper function can be found here (7T_pilot_analysis/decoding_parallel_wrapper.m at master · thomascope/7T_pilot_analysis · GitHub). It is a very simple wrapper that sets the searchlight bounds and a temporary resultsname for saving.
function results = decoding_parallel_wrapper(cfg,misc,searchlights_per_worker,worker_number)
cfg.searchlight.subset = ((worker_number-1)*searchlights_per_worker)+1 : min(worker_number*searchlights_per_worker, size(misc.residuals,2)); % min() stops the last worker running past the final searchlight
cfg.results.resultsname = cellstr(['parallel_loop_' num2str(worker_number)]);
addpath /group/language/data/thomascope/spm12_fil_r6906/ % Your SPM path for the workers
spm('ver'); % Needed, or the decoding toolbox sometimes complains in parallel that SPM is not initialised.
results = decoding(cfg,[],misc);
The second is a method for downsampling your searchlight space, without downsampling or losing the input data, and producing a .nii file of the correct dimensions at the end. The resulting searchlight volume is of lower resolution by the downsampling factor, but the output data are significantly smaller and the decoding runs significantly more quickly. A downsampling factor of 2 results in 8 times fewer searchlight locations (2^3, since the reduction applies along each of the three dimensions).
The script can be found here 7T_pilot_analysis/TDTCrossnobisAnalysis_1Subj.m at master · thomascope/7T_pilot_analysis · GitHub and can be put within a parfor of its own to be parallelised by subject. It includes an example script to make a simple effect map.
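The idea can be sketched roughly as follows: keep only mask voxels whose coordinates fall on a coarser grid and use those as searchlight centres, while each searchlight still draws on every voxel of the full-resolution data. This is a minimal illustration, not the linked script; the variable names (mask_dims, mask_index, centre_index) are made up for the example.

```matlab
% Hypothetical sketch of thinning out searchlight centres by a factor,
% while the underlying data stay at full resolution.
downsampling_factor = 2;
mask_dims = [96 96 72];                      % example volume dimensions
mask_index = find(true(mask_dims));          % stand-in for linear indices of a real brain mask
[x, y, z] = ind2sub(mask_dims, mask_index);  % voxel coordinates of the mask voxels
on_grid = mod(x, downsampling_factor) == 1 & ...
          mod(y, downsampling_factor) == 1 & ...
          mod(z, downsampling_factor) == 1;  % keep every nth voxel in each dimension
centre_index = mask_index(on_grid);          % downsampled searchlight centres
% numel(centre_index) is roughly 8x smaller than numel(mask_index) for a
% factor of 2; the searchlight spheres themselves are unchanged.
```

When writing the results back out, the decoded value for each centre can be placed at its voxel in a volume of the original dimensions (or nearest-neighbour upsampled), which is what keeps the final .nii at the correct resolution.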