Brain parcellation using nilearn's NiftiLabelsMasker

I am using nilearn’s NiftiLabelsMasker for brain parcellation. First I define a masker object with nilearn.input_data.NiftiLabelsMasker, passing the yeo_17 brain atlas as labels_img. After that I call masker.fit_transform(func_img, confounds) to get the time series.
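
In code, this is roughly what I do (yeo_17 is the atlas image, func_img my 4D functional image, confounds my confounds array):

from nilearn.input_data import NiftiLabelsMasker

# masker with the Yeo 17-network atlas as the labels image
masker = NiftiLabelsMasker(labels_img=yeo_17)
# one time series per atlas region, with the confounds regressed out
time_series = masker.fit_transform(func_img, confounds)
# time_series has shape (n_timepoints, n_regions)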

The shape of the functional nii file and the yeo_17 nii file do not match. Do I need to resample the atlas file to the shape of the functional file first, or does nilearn do it internally?

Nilearn will do it. You can choose whether the image is resampled to the atlas resolution or vice versa with the resampling_target parameter. From the NiftiLabelsMasker docstring:

    resampling_target: {"data", "labels", None}, optional.
        Gives which image gives the final shape/size. For example, if
        `resampling_target` is "data", the atlas is resampled to the
        shape of the data if needed. If it is "labels" then mask_img
        and images provided to fit() are resampled to the shape and
        affine of maps_img. "None" means no resampling: if shapes and
        affines do not match, a ValueError is raised. Defaults to "data".
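
So with the default setting you don’t need to resample anything yourself. A minimal sketch, reusing the variable names from your post:

from nilearn.input_data import NiftiLabelsMasker

# default behaviour: the atlas is resampled to the functional data if needed
masker = NiftiLabelsMasker(labels_img=yeo_17, resampling_target="data")
time_series = masker.fit_transform(func_img, confounds)

# or, to resample the functional data to the atlas grid instead:
masker = NiftiLabelsMasker(labels_img=yeo_17, resampling_target="labels")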

Okay. That answers my question. Thanks a lot.

Hello,

I have a similar problem. I have an atlas with 466 labels. When I use NiftiLabelsMasker, I end up with 462 ROIs instead of 466.
I am trying to understand how I am “losing” those 4 ROIs and which ROIs are lost.
I tried resampling_target="labels" but it doesn’t work and I still end up with 462 ROIs.

Any idea how to solve this problem?

Thanks a lot !

Hi, can you share the atlas image?

Hi Jerome,

Actually it is a fusion of masks. I sent it to you by email (I can’t upload images on Neurostars, I think? I realize that we are from the same lab!). I used this mask already with another dataset and it worked fine. I think that it might be related to my image: there is a drop in the signal in the orbitofrontal cortex.

I suspect that the missing ROIs are related to this. I wanted to know if there is a way to “force” nilearn to include all the labels from the atlas.

Thanks a lot,

Best,

Charles

It seems the atlas contains some empty regions (regions that have 0 voxels assigned to them):

from nilearn import image

# load the atlas and collect the set of integer labels actually present
atlas = image.load_img("Atlas_ting_in_HNU_mul_brainmask_98coverage.nii.gz")
atlas_values = set(image.get_data(atlas).astype(int).ravel())
print(
    f"atlas contains {len(atlas_values)} "
    "non-empty regions (including background = 0)"
)
# labels up to the maximum that never appear in the image are the empty regions
print(
    "empty regions:",
    set(range(max(atlas_values) + 1)).difference(atlas_values),
)

prints

atlas contains 463 non-empty regions (including background = 0)
empty regions: {456, 441, 457, 461}

The label masker won’t assign a dimension to empty regions in the atlas image, so the output dimension is 462 (the number of non-empty regions, excluding the background).

After transforming an image, the regions corresponding to the columns of the masked output can be looked up in masker.labels_: here it looks like [1, ..., 440, 442, ..., 455, 458, ..., 466].
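
For example, to see exactly which labels were dropped, you can compare masker.labels_ with the labels present in the atlas. A rough sketch, reusing atlas_values from the snippet above and a fitted masker:

# labels declared in the atlas (excluding background 0)
all_labels = sorted(atlas_values - {0})
# labels the masker actually used, one per output column
kept_labels = list(masker.labels_)
dropped = sorted(set(all_labels) - set(kept_labels))
print("regions dropped by the masker:", dropped)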

There is no way to force nilearn to keep these empty regions. Note that they couldn’t be assigned a meaningful value anyway: the masker computes the mean within each region, and the mean over 0 voxels is not defined.
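
If you need the same 466 columns for every subject, one manual workaround (outside of nilearn, just a sketch; it assumes the labels run from 1 to 466 and that time_series is the array returned by fit_transform) is to reinsert NaN columns for the missing regions after extraction:

import numpy as np

n_labels = 466
padded = np.full((time_series.shape[0], n_labels), np.nan)
# masker.labels_ gives the atlas label behind each column of time_series
for col, label in enumerate(masker.labels_):
    padded[:, label - 1] = time_series[:, col]  # labels are 1-based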

Hi there, I’m facing a similar issue, but the atlas does not seem to contain empty regions. For context, the same data with a different mask did not give an issue, and the same data with this specific mask only gave issues (missing ROIs) with some subjects.

You can find the yeo_atlas, the NIfTI file for which the atlas ‘works’, and the one for which it ‘doesn’t work’ at this link.

Any ideas are appreciated!