Question about nilearn searchlight output

Hi all,

I ran the nilearn searchlight code on my dataset for a whole-brain searchlight analysis, and the output figures look wrong. More specifically, the searchlight accuracy figure and the F-score figure look as if they were plotted off the correct axes. I've attached the relevant code and the output figures:

from nilearn import image
mean_fmri = image.mean_img(demo_img)

from nilearn.image import new_img_like
from nilearn.plotting import plot_stat_map, plot_img, show

searchlight_img = new_img_like(mean_fmri, searchlight.scores_)
plot_img(searchlight_img, bg_img=mean_fmri,
         title="Searchlight", display_mode="z", cut_coords=[-9],
         cmap='hot', threshold=.15, black_bg=True, colorbar=True)

[attached screenshot: searchlight accuracy figure]

# F-score results
# (the value assigned to p_ma was truncated when posting; p_unmasked
# here stands in for the voxelwise F-test p-values in brain space)
p_ma = np.ma.array(p_unmasked, mask=np.logical_not(mask_img))
f_score_img = new_img_like(mean_fmri, p_ma)
plot_stat_map(f_score_img, mean_fmri,
              title="F-scores", display_mode="z",
              cut_coords=[-9], colorbar=True)

[attached screenshot: F-scores figure]

I also tried plotting the searchlight_img data with nilearn's plotting.view_img_on_surf, and that output looks fine, although one large region that showed successful classification in my ROI-based MVPA did not show up in the whole-brain searchlight surface plot.

Any thoughts on what might be going wrong?


It seems the searchlight scores were computed from images that are not in the same space as mean_fmri. You would need to create an image using the correct affine for the searchlight scores (which you can get from the searchlight's mask_img or process_mask_img), and then use resample_to_img to resample it to the same affine as mean_fmri.


This is very useful. Thank you, Jerome!
