Using a probabilistic mask with nilearn's NiftiMasker function

Hi,

I’m trying to use nilearn’s NiftiMasker class to extract gray matter.

I’ve run into partial compatibility with a probabilistic mask. Masking to generate reports seems fine, but when I try to extract the masked data, I get an error stating that the mask isn’t binary:

    nii_masker = NiftiMasker(mask_img=graymatter_mask, memory="nilearn_cache",
                             memory_level=2, reports=True, t_r=TR)

    nii_masker.fit(epi)  # this is fine
    report = nii_masker.generate_report()  # no problems with reports, masking appears reasonable

    epi_masked_2d = nii_masker.transform(epi)  # here, nilearn complains that the mask isn't binary

Does anyone know of any workarounds, short of just binarizing the probabilistic mask?

As always, thanks for your help.

Krista


Hi,
You probably want to binarize the brain mask. My understanding is that you want to extract time courses from all voxels that have a certain level of grey matter density?

    binary_mask = nilearn.image.math_img('i1 > threshold', i1=my_probabilistic_mask)

where threshold is a number between 0 and 1 that you pick; note that it has to appear as a literal value in the formula string, e.g. 'i1 > 0.5'.
Best,


Thank you both for the question and answer!!! I’ve been fighting through nilearn documentation for hours. With your help,

from nilearn import image
from templateflow import api as tflow
from nilearn.maskers import NiftiMasker

# Fetch the MNI gray matter probability map from templateflow
mni_gm = tflow.get('MNI152NLin2009cAsym', desc=None, label='GM', resolution=2, suffix='probseg', extension='nii.gz')
# Binarize at 0.5 so the masker receives a strictly binary image
gray_matter_binary = image.math_img('i1 > 0.50', i1=image.load_img(mni_gm))
# Pass the binary image as mask_img so the masker uses it as-is
masker = NiftiMasker(mask_img=gray_matter_binary).fit()
masker.generate_report()

and, after trying all sorts of Maskers and failing, the report finally looks like a reasonable gray matter mask. 🙂

Sounds right. Could you tell us a bit more about what the difficulty was for you, so that we can improve the documentation?
Best,
Bertrand

@bthirion This Q&A is probably the wrong forum for this, and I don’t have a clear answer to your question. But I’ll leave a couple of thoughts anyway, with gratitude for every nilearn contributor.

While nilearn is exactly what I need to do neuroimaging research in Python, I don’t yet find it intuitive. This is probably more about my background learning to code procedurally in C and bash than any problem with nilearn. I think in sequential algorithmic steps, and nilearn seems to deal with abstractions a layer or two higher than I’d like. In the example above, the FSL approach makes sense to me; fslmaths epi4d.nii -mul binary_graymask3d.nii masked_epi.nii is intuitive. It’s clear to me what data live in each file and what fslmaths is doing. I imagined a ‘masker’ in nilearn as a mask object, containing a mask image, that can apply itself. But my intuition was wrong. The concept of a ‘masker’ object that doesn’t even necessarily contain a mask is still confusing to me. I imagine more exposure and experience will resolve this.

It also seems like nilearn is able to do a lot of really cool analyses with a few lines of code, which is great and powerful. But when I try to do something simple, like load up a pre-existing gray matter mask from templateflow, it seems like it should be a ‘masker’ thing to do, but none of the ‘masker’ examples covers it. This may be because I don’t actually need to load the templateflow mask, and perhaps nilearn maskers will just automagically do whatever masking is necessary behind the scenes if I know how to use them. Or maybe I only needed to load the binary mask as a NiftiImage, ignoring maskers altogether. I don’t know yet, so I’m trying and failing both ways, and coming across these posts that give me an insight I was missing before.

Thanks for asking, and feel free to DM me if I can do anything at all to contribute or make things better.

Thanks for your input. I think it contains valuable information for improving docstrings and examples.
Let me add a few remarks:

  • fslmaths epi4d.nii -mul binary_graymask3d.nii masked_epi.nii is not a great pattern imho because it creates redundant data on your disk. Having methods that can handle operations in-memory leads to more efficient resource use. For the same reason we advocate storing unsmoothed data but smoothing them whenever this is necessary for your analysis.
  • The most meaningful way to interact with a new image/atlas is to use plotting functions, such as plot_anat, which you can use to introspect it, or to overlay the mask image on a functional image you want to process. You can access some information on your template through nibabel, but most of the time I don’t find this convenient.
  • A masker (after calling the fit() method) always has a mask image attribute; after fitting it is exposed as mask_img_, whether you passed a mask in or it was estimated from the data.

Best,
Bertrand