spm_get_data is a very useful function in SPM that uses compiled routines to quickly load time series data for a set of voxel indices into a matrix. I use it all the time for ROI analysis, and I really miss it in Python. Is there something comparable? As far as I can tell, most packages (e.g., nitime, nilearn) load the entire 4D time series into memory and then index the resulting array, which won’t be as fast, and risks memory issues for large datasets.
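For concreteness, the approach I'd like to avoid looks roughly like this (file name and coordinates are hypothetical):

```python
import numpy as np
import nibabel as nib

ijk = np.array([[30, 40, 25], [31, 40, 25], [30, 41, 25]])  # hypothetical ROI voxel indices

img = nib.load('func.nii.gz')                     # hypothetical 4D functional run
data = img.get_fdata()                            # loads the entire 4D series into memory
ts = data[ijk[:, 0], ijk[:, 1], ijk[:, 2], :].T   # (n_vol, n_vox) time courses
```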
nibabel supports loading slices of datasets by indexing the dataobj, but that approach doesn’t work with fancy indexing. I suppose I could work out what slices would cover my ROI coordinates, load that sub-matrix and then index it, but that’s a lot of trouble and risks negating any performance gains.
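To make the workaround concrete, here is a minimal sketch of that bounding-box approach with nibabel (function name and arguments are mine, not an existing API):

```python
import numpy as np
import nibabel as nib

def roi_timecourses(nifti_path, roi_ijk):
    """Return (n_vol, n_vox) time courses for an (n_vox, 3) array of voxel indices,
    reading only the ROI's bounding box from disk via nibabel's proxy slicing."""
    img = nib.load(nifti_path)
    roi_ijk = np.asarray(roi_ijk, dtype=int)

    # Smallest sub-block of the volume that covers all ROI voxels
    i0, j0, k0 = roi_ijk.min(axis=0)
    i1, j1, k1 = roi_ijk.max(axis=0) + 1

    # Basic slicing on dataobj reads just this sub-block, not the whole 4D image
    block = img.dataobj[i0:i1, j0:j1, k0:k1, :]

    # Fancy indexing is fine once the (small) block is an in-memory array
    ts = block[roi_ijk[:, 0] - i0, roi_ijk[:, 1] - j0, roi_ijk[:, 2] - k0, :]
    return ts.T
```

For a compact ROI this reads far fewer voxels than the full image, but for widely scattered coordinates the bounding box can approach the whole volume, which is the "negating any performance gains" worry.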
Did you find an answer to your question? I am currently looking for the same thing, implemented in FSL, ANTs, or FreeSurfer, that I could call through nipype.
Thanks for that. To clarify, spm_get_data is a compiled routine that takes an spm_vol struct referencing a set of n_vol 3D or 4D nifti volumes (basically, a nifti header) and an [xyz, n] array of n voxel indices, and returns an [n_vol, n] array of time courses. You can see it in use here:
I haven't read the underlying C code, but it appears to achieve this without loading the full 4D nifti into memory, and so it is far more performant than the alternative route (which in SPM land would be spm_read_vols to load the full 4D matrix into memory, followed by indexing in Matlab).
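In the meantime, the closest I can sketch in Python is a rough analogue of that call signature using nibabel proxy slicing (not a port of SPM's compiled routine; the function and argument names are hypothetical):

```python
import numpy as np
import nibabel as nib

def get_data(vol_paths, xyz):
    """Rough analogue of spm_get_data(V, XYZ): vol_paths stands in for the spm_vol
    handle (a list of n_vol 3D nifti files) and xyz is a [3, n] array of voxel
    indices; returns an [n_vol, n] array of values."""
    xyz = np.asarray(xyz, dtype=int)
    i0, j0, k0 = xyz.min(axis=1)
    i1, j1, k1 = xyz.max(axis=1) + 1

    out = np.empty((len(vol_paths), xyz.shape[1]))
    for v, path in enumerate(vol_paths):
        # Read only the ROI's bounding box from each volume via proxy slicing
        block = nib.load(path).dataobj[i0:i1, j0:j1, k0:k1]
        out[v] = block[xyz[0] - i0, xyz[1] - j0, xyz[2] - k0]
    return out
```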