How to verify that resampling is not invalidating my decoder in nilearn?

I’m running nilearn’s decoder…

from nilearn.decoding import Decoder

generic_mask_filepath = '.../spm12/canonical/MNI152_T1_1mm_brain_mask.nii'
decoder = Decoder(estimator='svc', mask=generic_mask_filepath, standardize=True)
prediction = decoder.predict(predict_var)

and receiving a resampling warning:

.conda/envs/neuralsignature/lib/python3.8/site-packages/nilearn/image/ RuntimeWarning: NaNs or infinite values are present in the data passed to resample. This is a bad thing as they make resampling ill-defined and much slower.
  _resample_one_img(data[all_img + ind], A, b, target_shape,

I worry this is because the mask I’m using and the fMRI data are not being aligned correctly. When I plot them with nilearn’s plotting functions, they appear to be misaligned:

from nilearn import image

mask_generic = image.load_img(generic_mask_filepath)


And alignment is something we need to be concerned about, because the two images don’t have the same affine. The fMRI data’s affine:

[[   2.    0.    0.  -96.]
 [   0.    2.    0. -132.]
 [   0.    0.    2.  -78.]
 [   0.    0.    0.    1.]]

and the mask’s:

[[  -1.    0.    0.   90.]
 [   0.    1.    0. -126.]
 [   0.    0.    1.  -72.]
 [   0.    0.    0.    1.]]

However, when loading in FSLeyes, the two images appear to be correctly aligned (albeit the subject data looks a little more tightly trimmed than the standard image).

(as a new poster I can’t insert more than one media item in a post, so the FSLeyes screenshot is in a reply below instead)

Several questions:
(1) Is there an alignment problem during decoding, as there is in the image viewing? (I’m not worried about alignment in visualization per se; it just seems possibly diagnostic.)
(2) If so, is it the cause of the warning message I’m seeing?
(3) …and how could I address the alignment problem?
(4) If there’s no alignment problem during decoding, how can I be confident of that?

Here’s the FSLeyes screenshot showing that FSLeyes does align the image and mask correctly.

The (misaligned) anatomical image you see in the background of the first plot is the MNI template, which is used by default as the background image. The mask and mean_img indeed don’t seem to be aligned with the MNI template, but that’s not necessarily an issue.

You can try instead plotting your subj_data with bg_img=None, then the contours of the mask with add_contours, as shown here for example, to check alignment.


Ahhh, my mistake: I used mask where I had intended to use bg_img.

Running that again, as you suggest, they look pretty misaligned. Surely this is a problem, or at the very least it explains why we’re getting that resampling warning?



from nilearn import image, plotting

display = plotting.plot_anat(image.mean_img(subj_data), title='title',
                             cut_coords=[-34, -39, -9], threshold=0.01)
display.add_contours(mask_img, levels=[0.5], colors='b')


Indeed, they don’t seem aligned, and as you say that is a problem; it probably causes the NaNs that trigger the warning. I’m surprised that the same image and mask appear perfectly aligned in the second plot (FSLeyes) you showed.

Note that nilearn has some utilities (see here) to compute a mask that matches your data, and the Decoder can use them automatically if you pass None or a NiftiMasker, instead of an image, as the mask parameter.


OK, thanks for the follow-up. It turns out I hadn’t pointed to the right mask file: the fMRI data is in MNI space (as I want it to be), but the particular mask I was using in the most recent screenshot was in subject space. I switched to a subject mask in MNI space and it works:

display = plotting.plot_anat(image.mean_img(subj_data), title='title',
                             cut_coords=[-34, -39, -9], threshold=0.01)
display.add_contours(subject_mni_mask_filepath, levels=[0.5], colors='b')


Sorry for the mistake and thanks for your comments!
