Extracting voxels within a mask for GLM in nilearn

Hi everyone,
I want to fit a GLM to a specific set of voxels rather than the whole brain. For this purpose, I created a mask and applied it with the NiftiMasker class from nilearn. My code looks like the following:

import nibabel as nib
from nilearn.maskers import NiftiMasker

mask_img = nib.load('mask_img.nii')
func_data = nib.load('func_data.nii')

# Resample the mask to the functional grid; target_shape must be the 3D shape,
# not the data array itself.
masker = NiftiMasker(mask_img, target_affine=func_data.affine,
                     target_shape=func_data.shape[:3])
masker.fit()
masked_data = masker.inverse_transform(masker.transform(func_data))

This way, I get a NIfTI image with the exact dimensions of my func_data, containing only the voxels within the mask. Am I doing it right? Is there another way to extract the voxels within a mask, rather than transforming and inverse transforming?

Any input would be highly appreciated! Thanks!

AFAICT you’re doing it right.
But why not simply provide your mask, together with the fMRI data, to the GLM model? It should be able to handle it. Let me know if you run into an issue when doing so.
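
Something along these lines should work (a minimal sketch assuming nilearn's FirstLevelModel; the TR value and the events table here are placeholders, not your actual design):

import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Placeholder events table; replace with your own paradigm.
events = pd.DataFrame({'onset': [0, 30], 'duration': [15, 15],
                       'trial_type': ['cond_a', 'cond_b']})

# Passing mask_img restricts the GLM fit to the voxels inside the mask.
glm = FirstLevelModel(t_r=2.0, mask_img='mask_img.nii')  # t_r is assumed here
glm = glm.fit('func_data.nii', events=events)

# Contrasts are then estimated only within the masked voxels.
z_map = glm.compute_contrast('cond_a - cond_b')
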
HTH,
Bertrand

Hi Bertrand,
Thanks for your confirmation! Actually, I don’t know how to pass a mask along with the fMRI data to the GLM. Could you please share an example in nilearn?
Regards,
Nahid

Hi @Nhasan,

If I understand your question, I’d recommend looking at this example in the docs!

You can substitute your mask of interest for mask_img=data['mask'].
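
For instance (a sketch; 'mask_img.nii' stands in for your own mask file, and the commented line shows what the example uses):

from nilearn.glm.first_level import FirstLevelModel

# The example builds the model with the dataset's mask:
# fmri_glm = FirstLevelModel(mask_img=data['mask'])
# Pass your own mask of interest instead:
fmri_glm = FirstLevelModel(mask_img='mask_img.nii')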

HTH,

Elizabeth
