Nilearn loading functions output NaN matrices (get_data, load_img)

Summary of what happened:

I’m Marco, a PhD from TU Dresden in Systems Neuroscience.

I’m trying to load some 4D nii files of preprocessed resting-state data in order to extract the connectivity matrices. The files are pretty heavy (1.8 GB), since they’re float32 images at 2x2x2 mm with 500 volumes, but when I visualize them with MRIcroGL they look fine. When I use nilearn functions to load them, the 4D matrix contains only NaNs instead of numbers. The other parameters from the header are read normally. What could be the problem?

Command used (and if a helper script was used, a link to the helper script or the command generated):

import nilearn.image
img = nilearn.image.load_img(nii_path)


Python 3.10.8, nilearn 0.10.0

Environment (Docker, Singularity, custom installation):


Screenshots / relevant information:


Thank you for your help!

Hi @Marco_Bottino and welcome to Neurostars!

I have relabeled your post as Software Support and added the corresponding template. Please fill out the requested information by editing your post so we can best help you.


Hi @Marco_Bottino, the function image.load_img is built on nibabel functionality, so my guess is that you have to call img.get_fdata() in order to load the data array into memory. By default, nibabel image objects are loaded with an array proxy. See more info here: load_niimg

Thanks ymzayek!
I get the following error when I try to run img.get_fdata():

Moreover, what I really need is to run a masker, and that’s how I noticed this problem. The code I’m using is:

masker = NiftiLabelsMasker(labels_img=mask_nii, standardize=True)
ts = masker.fit_transform(nii_path).T

in order to get the time series, but I obtain the following warning:
...python3.10/site-packages/nilearn/_utils/ UserWarning: Non-finite values detected. These values will be replaced with zeros.
which I trace back to the matrix being loaded incorrectly.

Update: when I try to load the same image at another resolution (2.4x2.4x2.4 mm), which weighs 700 MB instead of 1.8 GB, the problem is gone, so I guess it’s a memory allocation problem. How can I deal with this?
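For reference, a back-of-the-envelope memory estimate. Assuming a standard MNI 2 mm grid of 91x109x91 voxels (an assumption; the actual grid may differ), 500 float32 volumes come to roughly 1.7 GiB, which matches the reported 1.8 GB file size, and get_fdata() upcasts to float64, roughly doubling the in-memory footprint:

```python
# Rough memory estimate, assuming a 91x109x91 MNI 2 mm grid (hypothetical)
nx, ny, nz, nt = 91, 109, 91, 500
float32_bytes = nx * ny * nz * nt * 4  # float32 = 4 bytes per voxel
float64_bytes = float32_bytes * 2      # get_fdata() returns float64

print(f"float32: {float32_bytes / 1024**3:.2f} GiB")
print(f"float64: {float64_bytes / 1024**3:.2f} GiB")
```

So fully loading the 2 mm image can need well over 3 GiB of free RAM, while the 2.4 mm version needs much less.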

Your problem may depend on the IDE you’re using. Are you running the code through the VS Code debugger? It seems you may have to increase the timeout, since loading the data from your original NIfTI image would take more time. Can you check whether your code runs as a Python script from the terminal, or run it using IPython?

It’s not obvious to me that something’s wrong. NaNs are valid values, and warning that they’ll be replaced with zeros is also reasonable. Is the data cache 100% NaNs? Try np.isnan(img._data_cache).all(). A tool may choose to write NaNs where values are unknown if it wants to distinguish voxels having a value of 0 from voxels having no value.
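As a quick numpy-only sketch of that check (the array values here are made up), distinguishing an all-NaN array from one where only some voxels are NaN:

```python
import numpy as np

# Toy array mixing real values and NaNs (values are illustrative only)
data = np.array([[0.0, np.nan, 1.5],
                 [np.nan, 2.0, np.nan]])

all_nan = np.isnan(data).all()        # is *everything* NaN?
nan_fraction = np.isnan(data).mean()  # or just a fraction of the voxels?

print(all_nan, nan_fraction)
```

If only a fraction is NaN, those are probably out-of-mask voxels rather than a loading bug.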

img._data_cache is where nilearn caches the result of get_data(img), replicating the now-defunct img.get_data() interface from nibabel. This result should simply be numpy.asanyarray(img.dataobj), which you can run yourself to verify that you get the same thing. That result is exactly img.dataobj.slope * img.dataobj.get_unscaled() + img.dataobj.inter, and may have a type ranging from img.get_data_dtype() to np.float64, depending on the values of the scaling parameters.
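To illustrate that scaling arithmetic with plain numpy (the raw values and slope/intercept here are invented, standing in for get_unscaled() and the header’s scl_slope/scl_inter):

```python
import numpy as np

# Hypothetical raw stored values, as get_unscaled() might return them
raw = np.array([100, 200, 300], dtype=np.int16)
slope, inter = 0.5, 10.0  # stand-ins for scl_slope / scl_inter

# The int16 array is promoted to float64 by the float scaling parameters
scaled = slope * raw + inter
print(scaled, scaled.dtype)
```

This is also why the loaded dtype can differ from the on-disk dtype: any non-trivial slope or intercept forces a float result.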

Hopefully this helps you track things down.