Conducting and visualizing GLM on atlas ROIs rather than voxels with nilearn

Summary of what happened:

I’m new to nilearn, so I’m struggling to find the right documentation for this. I’d like to perform a typical first-level GLM analysis with a specified design matrix, but instead of fitting the model at the voxel level, I’d like to get results for the average signal in each ROI of an atlas. Ideally, I could then visualize the whole-brain stat map as ROIs.

Command used (and if a helper script was used, a link to the helper script or the command generated):

So far I’ve got the atlas imported and a NiftiLabelsMasker object created with it that seems to be the right shape when I transform a nifti image with it. However, when I try to use that masker object as the ‘mask_img’ argument in a FirstLevelModel, I get an error about incompatible data. Am I misunderstanding what kind of masker is possible to pass to the GLM function, or is there something I should be doing to my data nifti first?

Thanks for your help in advance.

# Fetch the Destrieux atlas and build a labels masker from it
destrieux = nilearn.datasets.fetch_atlas_destrieux_2009()
masker = nilearn.maskers.NiftiLabelsMasker(
    labels_img=destrieux.maps, labels=destrieux.labels, standardize=True
)
masker = masker.fit()

# Passing the fitted labels masker as mask_img raises the TypeError below
masker_firstlevel_glm = nilearn.glm.first_level.FirstLevelModel(mask_img=masker)
masker_firstlevel_glm = masker_firstlevel_glm.fit(smooth_brain, design_matrices=dm)

Version:

nilearn 0.11.1

Relevant log outputs (up to 20 lines):

TypeError                                 Traceback (most recent call last)
Cell In[302], line 2
      1 masker_firstlevel_glm = niglm.first_level.FirstLevelModel(mask_img=masker)
----> 2 masker_firstlevel_glm = masker_firstlevel_glm.fit(smooth_brain, design_matrices=dm)

File /opt/miniconda3/lib/python3.12/site-packages/nilearn/glm/first_level/first_level.py:884, in FirstLevelModel.fit(self, run_imgs, events, confounds, sample_masks, design_matrices, bins)
    881 # Initialize masker_ to None such that attribute exists
    882 self.masker_ = None
--> 884 self._prepare_mask(run_imgs[0])
    886 self.design_matrices_ = self._create_all_designs(
    887     run_imgs, events, confounds, design_matrices
    888 )
    890 # For each run fit the model and keep only the regression results.

File /opt/miniconda3/lib/python3.12/site-packages/nilearn/glm/first_level/first_level.py:1143, in FirstLevelModel._prepare_mask(self, run_img)
   1128 elif not isinstance(
   1129     self.mask_img, (NiftiMasker, SurfaceMasker, SurfaceImage)
   1130 ):
   1131     self.masker_ = NiftiMasker(
   1132         mask_img=self.mask_img,
   1133         smoothing_fwhm=self.smoothing_fwhm,
   (...)   1141         memory_level=self.memory_level,
   1142     )
-> 1143     self.masker_.fit(run_img)
   1145 else:
   1146     # Make sure masker has been fitted otherwise no attribute mask_img_
   1147     self.mask_img._check_fitted()

File /opt/miniconda3/lib/python3.12/site-packages/nilearn/maskers/nifti_masker.py:479, in NiftiMasker.fit(self, imgs, y)
    475     self.mask_img_ = self._cache(compute_mask, ignore=["verbose"])(
    476         imgs, verbose=max(0, self.verbose - 1), **mask_args
    477     )
    478 else:
--> 479     self.mask_img_ = _utils.check_niimg_3d(self.mask_img)
    481 if self.reports:  # save inputs for reporting
    482     self._reporting_data = {
    483         "mask": self.mask_img_,
    484         "dim": None,
    485         "images": imgs,
    486     }

File /opt/miniconda3/lib/python3.12/site-packages/nilearn/_utils/niimg_conversions.py:372, in check_niimg_3d(niimg, dtype)
    338 def check_niimg_3d(niimg, dtype=None):
    339     """Check that niimg is a proper 3D niimg-like object and load it.
    340 
    341     Parameters
   (...)    370 
    371     """
--> 372     return check_niimg(niimg, ensure_ndim=3, dtype=dtype)

File /opt/miniconda3/lib/python3.12/site-packages/nilearn/_utils/niimg_conversions.py:317, in check_niimg(niimg, ensure_ndim, atleast_4d, dtype, return_iterator, wildcards)
    312     return ni.image.concat_imgs(
    313         niimg, ensure_ndim=ensure_ndim, dtype=dtype
    314     )
    316 # Otherwise, it should be a filename or a SpatialImage, we load it
--> 317 niimg = load_niimg(niimg, dtype=dtype)
    319 if ensure_ndim == 3 and len(niimg.shape) == 4 and niimg.shape[3] == 1:
    320     # "squeeze" the image.
    321     data = safe_get_data(niimg)

File /opt/miniconda3/lib/python3.12/site-packages/nilearn/_utils/niimg.py:126, in load_niimg(niimg, dtype)
    124     niimg = load(niimg)
    125 elif not isinstance(niimg, spatialimages.SpatialImage):
--> 126     raise TypeError(
    127         "Data given cannot be loaded because it is"
    128         " not compatible with nibabel format:\n"
    129         + repr_niimgs(niimg, shorten=True)
    130     )
    132 dtype = _get_target_dtype(_get_data(niimg).dtype, dtype)
    134 if dtype is not None:
    135     # Copyheader and set dtype in header if header exists

TypeError: Data given cannot be loaded because it is not compatible with nibabel format:
destrieux2009_rois...

Hi @smburns47,

Nilearn GLMs do not accept labels maskers. If you want to stick with the ROI-level approach, your best bet would be to run the GLMs in another statistics modeling package, and then update the label image intensities with the resultant coefficients you wish to plot.
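
For example, a rough sketch of that last step (painting coefficients estimated elsewhere back onto the atlas) could look like the following, where roi_betas is an assumed dict mapping each Destrieux label value to its estimated coefficient:

    # Hypothetical sketch: paint per-ROI coefficients back onto the atlas image.
    # `roi_betas` is assumed to map each atlas label value (int) to a coefficient
    # estimated in another statistics package.
    import nibabel as nib
    import numpy as np
    from nilearn.datasets import fetch_atlas_destrieux_2009
    from nilearn.image import load_img

    destrieux = fetch_atlas_destrieux_2009()
    atlas_img = load_img(destrieux.maps)
    atlas_data = atlas_img.get_fdata()

    # Start from zeros so the background (label 0) stays empty
    coef_data = np.zeros(atlas_data.shape, dtype=np.float32)
    for label_value, beta in roi_betas.items():
        coef_data[atlas_data == label_value] = beta

    coef_img = nib.Nifti1Image(coef_data, affine=atlas_img.affine)
    # coef_img can then be plotted, e.g. with nilearn.plotting.plot_stat_map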

Best,
Steven

Ah bummer. Thanks for the quick answer at least!

Possible solution.

Use the NiftiLabelsMasker on the output of the GLM.

The following code is a marimo notebook.

To view it:

pip install marimo
marimo edit --sandbox path_to_where_you_saved_it.py 
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "marimo",
#     "matplotlib==3.10.1",
#     "nilearn==0.11.1",
#     "numpy==2.2.4",
#     "pandas==2.2.3",
# ]
# ///

import marimo

__generated_with = "0.12.0"
app = marimo.App(width="full")


@app.cell
def _():
    import pandas as pd
    from nilearn.datasets import fetch_localizer_first_level
    from nilearn.image import mean_img

    data = fetch_localizer_first_level()

    t_r = 2.4

    fmri_img = data.epi_img

    mean_image = mean_img(fmri_img, copy_header=True)

    events = pd.read_table(data.events)
    events
    return (
        data,
        events,
        fetch_localizer_first_level,
        fmri_img,
        mean_image,
        mean_img,
        pd,
        t_r,
    )


@app.cell
def _(events, fmri_img, t_r):
    from nilearn.glm.first_level import FirstLevelModel

    flm = FirstLevelModel(
        t_r=t_r,
        noise_model="ar1",
        standardize=False,
        hrf_model="spm",
        drift_model="cosine",
        high_pass=0.01,
    )

    flm = flm.fit(fmri_img, events)
    return FirstLevelModel, flm


@app.cell
def _(flm):
    import numpy as np

    design_matrix = flm.design_matrices_[0]

    contrast_matrix = np.eye(design_matrix.shape[1])
    contrasts = {
        column: contrast_matrix[i]
        for i, column in enumerate(design_matrix.columns)
    }

    contrasts["audio"] = (
        contrasts["audio_left_hand_button_press"]
        + contrasts["audio_right_hand_button_press"]
        + contrasts["audio_computation"]
        + contrasts["sentence_listening"]
    )

    z_map = flm.compute_contrast(contrast_def=contrasts["audio"], output_type="z_score")
    return contrast_matrix, contrasts, design_matrix, np, z_map


@app.cell
def _(mean_image, z_map):
    from nilearn.plotting import plot_stat_map, show

    plotting_config = {
        "bg_img": mean_image,
        "display_mode": "ortho",
        "black_bg": True,
        "cut_coords": [0, 0, 0],
    }
    plot_stat_map(
        z_map,
        title="listening > rest",
        **plotting_config,
    )
    show()
    return plot_stat_map, plotting_config, show


@app.cell
def _():
    return


@app.cell
def _(mo, z_map):
    from nilearn.datasets import fetch_atlas_destrieux_2009
    from nilearn.maskers import NiftiLabelsMasker

    destrieux = fetch_atlas_destrieux_2009()
    masker = NiftiLabelsMasker(labels_img=destrieux.maps, labels=destrieux.labels, standardize=True)
    signal = masker.fit_transform(z_map)
    z_map_averaged = masker.inverse_transform(signal)

    mo.md("""
    ## Average results within ROI

    Average effects within ROIs with NiftiLabelsMasker.
    Then inverse transform back to an image.

    """)
    return (
        NiftiLabelsMasker,
        destrieux,
        fetch_atlas_destrieux_2009,
        masker,
        signal,
        z_map_averaged,
    )


@app.cell
def _():
    return


@app.cell
def _(plot_stat_map, plotting_config, show, z_map_averaged):
    plot_stat_map(
        z_map_averaged,
        title="listening > rest (averaged over ROIs)",
        **plotting_config,
    )
    show()
    return


@app.cell
def _():
    import marimo as mo
    return (mo,)


if __name__ == "__main__":
    app.run()

The important bit.

    from nilearn.datasets import fetch_atlas_destrieux_2009
    from nilearn.maskers import NiftiLabelsMasker

    destrieux = fetch_atlas_destrieux_2009()
    masker = NiftiLabelsMasker(labels_img=destrieux.maps, labels=destrieux.labels, standardize=True)
    signal = masker.fit_transform(z_map)
    z_map_averaged = masker.inverse_transform(signal)

Before averaging:

After averaging:

This is the quick and dirty approach.

What would probably be better would be to ROI-average each volume of your image time series before fitting, as in the sketch below.
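
Something along these lines, for example (an untested sketch, assuming you already have a 4D fmri_img, a design-matrix DataFrame dm, and a contrast vector contrast_vector; run_glm and compute_contrast work on plain arrays):

    # Untested sketch: parcellate the time series first, then fit the GLM
    # on the ROI-averaged signals rather than on voxels.
    from nilearn.datasets import fetch_atlas_destrieux_2009
    from nilearn.glm.contrasts import compute_contrast
    from nilearn.glm.first_level import run_glm
    from nilearn.maskers import NiftiLabelsMasker

    destrieux = fetch_atlas_destrieux_2009()
    masker = NiftiLabelsMasker(labels_img=destrieux.maps, standardize=True)

    # (n_scans, n_rois) array of ROI-averaged time series
    roi_timeseries = masker.fit_transform(fmri_img)

    # One AR(1) GLM per ROI instead of one per voxel
    labels, estimates = run_glm(roi_timeseries, dm.values, noise_model="ar1")

    # t contrast over ROIs, painted back onto the atlas for plotting
    contrast = compute_contrast(labels, estimates, contrast_vector, stat_type="t")
    roi_z_map = masker.inverse_transform(contrast.z_score())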

@Steven am I missing something obvious about why the quick-and-dirty approach would not be correct?

Using the labels masker on the output of the GLM is a bit different from what the user was asking, from what I can tell.

But, I suppose ROI-averaging each volume before fitting a GLM would work. The caveats are that it would be very computationally redundant (each voxel in a parcel would be running the same model) and that built-in methods of multiple-comparison correction would not be valid.

Also @smburns47, I see you are fitting the GLM on a variable called smooth_brain. I can’t see your full code, so I am not positive what it represents, but you should only parcellate unsmoothed data.

@Steven Yes, you’re right; I had that object loaded just for practice, but I’m looking to do the parcellation on unsmoothed data.

@Remi-Gau I think I’ve seen some papers taking that approach (averaging betas together within an ROI, rather than computing one beta from the averaged time series), but I’m now wondering whether that has different implications for the variance structure in significance testing?

Playing around further yesterday I think(?) this is also a solution for parcellation first that stays within the nilearn ecosystem:

# ROI-average the preprocessed (unsmoothed) time series: (n_scans, n_rois)
masker = nilearn.maskers.NiftiLabelsMasker(labels_img=destrieux.maps, standardize=True)
destrieux_rois = masker.fit_transform(preproc_brain)

# Fit an OLS GLM directly on the ROI time series
roi_glm = nilearn.glm.OLSModel(dm)
regression_results = roi_glm.fit(destrieux_rois)

# Extract the contrast effect per ROI and paint it back onto the atlas
contrast_betas = regression_results.Tcontrast(vidaud_contrast).effect
roi_map = masker.inverse_transform(contrast_betas)
nilearn.plotting.plot_stat_map(roi_map)
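
If you want a t map rather than raw effect sizes, the Tcontrast result should also expose a t attribute alongside effect, so something like this (untested) should work:

    roi_t_map = masker.inverse_transform(regression_results.Tcontrast(vidaud_contrast).t)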

Maybe I misunderstood then.

Yup, multiple-comparison corrections are just useless here.