I have whole-brain fMRI data and two masks.
I want to apply both of them, so that I get every voxel that is in at least one of the masks (essentially an OR condition).
Meaning, to unify them.
Is this possible?
Hi @slab15
I think nilearn.image.math_img is what you’re looking for.
Here is a small toy example to show how it works:
import numpy as np
import nibabel as nib
from nilearn import image

data1 = np.array([[1, 0, 1],
                  [0, 0, 0],
                  [1, 1, 1]])
mask1 = nib.Nifti1Image(data1, affine=np.eye(4))

data2 = np.array([[0, 0, 1],
                  [1, 0, 0],
                  [1, 1, 1]])
mask2 = nib.Nifti1Image(data2, affine=np.eye(4))

mask = image.math_img("np.logical_or(img1, img2)", img1=mask1, img2=mask2)
np.asarray(mask.dataobj)  # public accessor (avoid the private _dataobj)
array([[1, 0, 1],
       [1, 0, 0],
       [1, 1, 1]], dtype=int8)
Nicolas
@NicolasGensollen Thanks, I currently use the mask like this:
data = NiftiMasker(mask_img=mask)
So is there any way to do so directly from NiftiMasker?
Hi @slab15 –
You’ll need to first run the image.math_img command that @NicolasGensollen described above, and then the NiftiMasker command.
So, to pull from @NicolasGensollen’s example above, it would look like:
mask = image.math_img("np.logical_or(img1, img2)", img1=mask1, img2=mask2)
NiftiMasker(mask_img=mask)
HTH,
Elizabeth
@emdupre @NicolasGensollen Sorry, I wasn’t clear.
NiftiMasker takes the path of a .nii.gz file.
When I pass the paths to image.math_img, I get an error:
ValueError: ("Input images cannot be compared, you provided 'dict_values(['/mask1.nii.GZ', '/mask2.nii.gz'])',", 'Following field of view errors were detected:\n- img_#0 and img_#1 do not have the same shape\n- img_#0 and img_#1 do not have the same affine')
Any idea what may cause it?
Hi @slab15 –
Sorry, for clarity, can you share the exact command(s) you’re running?
Otherwise, it seems that you’re getting an error when trying to take the logical_or
of both masks since they can’t be compared directly. The specific errors are that they do not have the same shape or the same affine. If you’re unfamiliar with affines, you can read more about them in the Nibabel documentation, but broadly, it means that your two masks aren’t on the same grid, and so their overlap is ill-defined.
You can confirm this by running something like:
import nibabel as nib

img1 = nib.load('mask1.nii.gz')
img2 = nib.load('mask2.nii.gz')
print(img1.shape, img2.shape)
print(img1.affine)
print(img2.affine)
If the two masks were defined in the same space (e.g., a subject’s native space) or have been normalized into the same space (e.g., MNI space), you can resample one onto the other’s grid so they become comparable. This could be done using something like nilearn.image.resample_img as follows:
import nibabel as nib
from nilearn.image import resample_img

mask1 = nib.load('mask1.nii.gz')
mask2 = nib.load('mask2.nii.gz')
resampled_mask2 = resample_img(mask2,
                               target_affine=mask1.affine,
                               target_shape=mask1.shape,
                               interpolation='nearest')  # 'nearest' keeps the mask binary
You can then check the resampled affine and re-try the image.math_img
call.
HTH,
Elizabeth
The problem was indeed that the images had different affines.
I took masks with the same affine and it is working now.
Thanks!