Nilearn second-level error img.slicer


I’m struggling to clear the following error coming up when I’m using second_level_model.compute_contrast


TypeError: Cannot slice image objects; consider using `img.slicer[slice]` to generate a sliced image (see documentation for caveats) or slicing image array data with `img.dataobj[slice]` or `img.get_fdata()[slice]`

I found that this was reported as an issue on the nilearn GitHub repo, but it seems to have been solved and closed in 2018. Does anyone know how to clear this? I use image.load_img to load the data.

Any help would be greatly appreciated!



What nilearn version are you using?
Could you provide the image and the code you’re using?

Hi, the version is 0.8.1.

My code follows the second-level one-sample t-test tutorial example (from here).

The images are loaded using:

cmap_filenames = image.load_img('images/sub_*.nii')
design_matrix = pd.DataFrame([1] * n_samples, columns=['intercept'])
second_level_model = SecondLevelModel().fit(cmap_filenames, design_matrix=design_matrix)
z_map = second_level_model.compute_contrast(second_level_stat_type='t', output_type='z_score')

I know it may have something to do with how the images are formatted or loaded. I can’t really upload the image file here.

PS. just a disclaimer I’m a beginner with nilearn.


Hi @PuddleJumper

The reason you get this error is because, when you call load_img on the list of 3D images, it concatenates them into a single 4D image:

(53, 63, 46)
(53, 63, 46, 16)

SecondLevelModel expects as input either a list of images, a list of FirstLevelModel objects, or a DataFrame, as can be seen in its docstring:

and which is ensured here:

For some reason, passing a single image as second_level_input doesn’t raise an error in fit(), which seems like a bug to me (I’ll have to double check). So, when you call compute_contrast(), one of the first steps is to call _check_first_level_contrast here:

which then tries to take the first element of the list, which in your case is interpreted as slicing the image:

Here is the full code:

import pandas as pd
from nilearn.image import load_img
from nilearn.glm.second_level import SecondLevelModel
from nilearn.datasets import fetch_localizer_contrasts

n_samples = 16
data = fetch_localizer_contrasts(["left vs right button press"], n_samples)
# This will fail:
# cmap_filenames = load_img(data.cmaps)
# This will work:
cmap_filenames = data.cmaps
design_matrix = pd.DataFrame([1] * n_samples, columns=['intercept'])
second_level_model = SecondLevelModel().fit(cmap_filenames, design_matrix=design_matrix)
z_map = second_level_model.compute_contrast(output_type='z_score')

Let me know if this isn’t clear. In the meantime I’ll see whether we should raise an error when a single 4D image is passed, or if this indexing is a bug.


Thanks @NicolasGensollen
To me this sounds like a bug. Shouldn’t a 4D image be equivalent to a list of 3D images?

Thanks, I opened this issue in Nilearn to make sure we don’t forget to fix this one way or another.

Thanks a lot! It really helped; now I was able to identify the issue!

