Nilearn: Betas image as input for decoding analysis

Hi everyone,

This is a very beginner question, sorry for that…

I started using nilearn to run a decoding analysis (MVPA). We chose to use betas obtained from a first-level GLM, but is it possible, and how, to feed one of nilearn's decoding functions these betas instead of 4D images (e.g., raw BOLD)? I have been looking for a way to do this, but I have to say it was not very clear to me…

If someone can help me, it would be very nice!



Hi Mathieu,

You can pass NiftiMasker a list of 3D images, and it will extract the voxels from your region of interest just as it would for a 4D image:

from nilearn.input_data import NiftiMasker  # nilearn.maskers in recent versions

# One 3D beta image per condition/trial from the first-level GLM
imgs = ['beta_1.nii', 'beta_2.nii', 'beta_3.nii']
region_mask = 'some_mask_img.nii'

masker = NiftiMasker(mask_img=region_mask)
region_data = masker.fit_transform(imgs)  # shape: (n_images, n_voxels)

region_data gives you an observations-by-voxels array (here, one row per beta image) – your feature matrix, X. You can pair it with a label vector, y, giving the condition for each beta image (e.g., [0, 1, 0]). From there you have everything you need for classification in scikit-learn or some of nilearn's functions.
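To make the last step concrete, here is a minimal sketch of feeding such a feature matrix into scikit-learn. It uses a random array as a stand-in for the masker output (the shapes and labels are made up for illustration):

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

# Synthetic stand-in for masker.fit_transform(imgs):
# 20 beta images x 50 voxels (one row per observation).
rng = np.random.RandomState(0)
X = rng.randn(20, 50)
y = np.tile([0, 1], 10)  # one condition label per beta image

# Cross-validated classification accuracy.
clf = LinearSVC()
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())

In a real analysis you would replace X with region_data and y with your condition labels; the cross-validation scheme (here a plain 5-fold) should usually respect the run structure of your data.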


Thank you very much Dan, that was very helpful!
