NiftiMasker for ROI mask

Hi everyone,

I have a question regarding the NiftiMasker and whether I'm using it correctly. I have an fMRI image and a binary mask of the same shape. The affines are not exactly the same, but very similar:

mask_affine=array([[   2. ,    0. ,    0. ,  -96.5],
                   [   0. ,    2. ,    0. , -132.5],
                   [   0. ,    0. ,    2. ,  -78.5],
                   [   0. ,    0. ,    0. ,    1. ]])

img_affine=array([[   2. ,    0. ,    0. ,  -96. ],
                  [   0. ,    2. ,    0. , -132. ],
                  [   0. ,    0. ,    2. ,  -78. ],
                  [   0. ,    0. ,    0. ,    1. ]])

I tried two different ways to extract my region of interest from the fMRI image. For the first option I initialized a NiftiMasker with the mask, called fit_transform on the image, and then averaged. For the second option I first averaged the volumes (with image.mean_img), then got the data and flattened it, and also flattened the mask. I then used np.where to pick only the voxels inside the ROI.
However, the results I get from these two options are quite different. Even when I resample the image to match the mask affine for option 2, I get different results.

I would have expected the results to be the same, or at least very close, but they are not. I tried several other things and looked into the NiftiMasker code, but I am at a loss as to why this is happening. I would really appreciate any insight or suggestions to help me understand why this happens, or which way is correct.

Here are my code examples for the two options. Option 1:

from nilearn import input_data
import numpy as np

nifti_masker = input_data.NiftiMasker(mask_img=mask)
masked_img = nifti_masker.fit_transform(img_cleaned_smoothed)  # (n_volumes, n_voxels_in_mask)
masked_cut = masked_img[:-100, :]  # drop the last 100 volumes
option1 = np.mean(masked_cut, axis=0)  # mean signal per voxel

And option 2:

from nilearn import image
import numpy as np

img_sliced = image.index_img(img_cleaned_smoothed, slice(0, 100))  # first 100 volumes
mean_img = image.mean_img(img_sliced)
img_flat = image.get_data(mean_img).flatten()
mask_flat = image.get_data(mask).flatten()
masked_area = np.asarray(np.where(mask_flat >= 0.5))  # mask values are 0 or 1, so the threshold is a bit unnecessary here
option2 = img_flat[masked_area]

Thank you in advance; I appreciate any help, comments, and insights.

It looks like in option 1 you use all volumes except the last 100, whereas in option 2 you use the first 100?

Hey! Thank you! And yes, that's right! However, the image has exactly 200 volumes, so even though I use everything up to the last 100 (masked_img[:-100]) in the first option, that is the same as using the first 100 in this case. I'm sorry, I should have mentioned that, or changed the code to be consistent and reusable for other cases!
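The equivalence relied on here can be sanity-checked with a toy array (shapes made up for illustration):

```python
import numpy as np

# With exactly 200 volumes, dropping the last 100 and keeping the first 100
# select the same rows; this only holds because 200 - 100 == 100.
data = np.arange(200 * 3).reshape(200, 3)  # stand-in for (n_volumes, n_voxels)
print(np.array_equal(data[:-100, :], data[:100, :]))  # True
```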

Note that there is no need to use np.where:

masked_area = mask_flat >= 0.5

works as well.
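As a toy illustration (made-up arrays) of the difference this makes:

```python
import numpy as np

mask_flat = np.array([0., 1., 1., 0., 1.])
img_flat = np.array([10., 20., 30., 40., 50.])

# Boolean indexing selects the same voxels as np.where, but returns a flat
# (n_voxels_in_mask,) array, whereas indexing with np.asarray(np.where(...))
# produces a (1, n_voxels_in_mask) array.
bool_idx = mask_flat >= 0.5
where_idx = np.asarray(np.where(mask_flat >= 0.5))

print(img_flat[bool_idx].shape)   # (3,)
print(img_flat[where_idx].shape)  # (1, 3)
print(np.array_equal(img_flat[bool_idx], img_flat[where_idx].ravel()))  # True
```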