Mapping voxel number to brain region

When using Nilearn Brain Data (e.g. in this example):

from bids import BIDSLayout
from nltools.data import Brain_Data
from nltools.utils import get_anatomical
layout = BIDSLayout(data_dir, derivatives=True)
data = Brain_Data(layout.get(target='subject', scope='derivatives', suffix='bold', extension='nii.gz', return_type='file')[0])
print(data.shape())

I get that the shape is (128, 238955).
So I have 128 TRs and 238955 voxels, and each point is the activity of a voxel at a certain TR, right?
My question is, how can I map a voxel to a specific brain area?
Meaning, what is the relevant region for voxel #20? Or how can I get all striatum voxels?


Hi, you can use a NiftiMasker from nilearn, providing it the atlas you want to use, and then get the data matrix only for the ROI that you are interested in.
https://nilearn.github.io/auto_examples/06_manipulating_images/plot_mask_computation.html#sphx-glr-auto-examples-06-manipulating-images-plot-mask-computation-py
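
For instance, a minimal sketch of that approach, assuming you have a binary ROI mask (both file names here are hypothetical; in recent nilearn releases NiftiMasker lives in nilearn.maskers rather than nilearn.input_data):

from nilearn.input_data import NiftiMasker  # nilearn.maskers in newer releases

# Fit the masker on a binary ROI mask, then extract only those voxels
masker = NiftiMasker(mask_img='striatum_mask.nii.gz')
roi_data = masker.fit_transform('func_4d.nii.gz')  # shape: (n_TRs, n_ROI_voxels)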
Best,
Bertrand

@bthirion Sorry, perhaps I was unclear.
I understand that the 2d array is obtained by applying an MNI mask to the 4d data.
But after applying it, I get 238955 voxels.
How can I translate these 1d voxel indices to 3d coordinates in MNI space?
What is the MNI <x,y,z> location of a specific voxel?

Hi @o_aug !

I think one potential point of confusion here is that the code snippet you linked isn’t actually Nilearn, but NLtools from the COSAN lab.

I’m not as familiar with NLtools, so @ejolly would be a better expert there. But in Nilearn, you could use NiftiMasker with a given ROI as @bthirion recommended. Or, if you have an atlas, you could use NiftiLabelsMasker as in this example, where you would be aggregating over all voxels in each ROI. Note that if you wanted the raw voxel values themselves instead of an aggregate, NiftiLabelsMasker does not (yet) support returning those. Instead, you could use NiftiMasker and iterate over the given ROIs of interest from the atlas.
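
As an illustration, a sketch of the aggregate approach (the atlas choice and functional file name are just placeholders):

from nilearn import datasets
from nilearn.input_data import NiftiLabelsMasker  # nilearn.maskers in newer releases

# Average the signal over all voxels within each region of a labeled atlas
atlas = datasets.fetch_atlas_harvard_oxford('cort-maxprob-thr25-2mm')
labels_masker = NiftiLabelsMasker(labels_img=atlas.maps)
region_signals = labels_masker.fit_transform('func_4d.nii.gz')  # (n_TRs, n_regions)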

HTH,

Elizabeth

EDIT: If you want to project back to the volume to have x, y, z coordinates, you can just use the inverse_transform method which will put the matrix back in volumetric space.
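
For example (a sketch, assuming masker is the fitted NiftiMasker and data_2d is the (n_TRs, n_voxels) matrix it produced):

# Project the 2d matrix back into a Nifti image in the original volume space
vol_img = masker.inverse_transform(data_2d)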

For this, you can simply get the array coordinates of the voxels (their integer indices) and transform them to MNI coordinates using the coord_transform function from nilearn.image:
https://nilearn.github.io/modules/generated/nilearn.image.coord_transform.html
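
Concretely, something like this sketch, assuming the 2d array was built from a binary mask image and flattened in numpy’s default C order (the mask file name is hypothetical):

import numpy as np
import nibabel as nib
from nilearn.image import coord_transform

mask_img = nib.load('brain_mask.nii.gz')  # the mask that defined the 238955 voxels
ijk = np.argwhere(mask_img.get_fdata() > 0)  # (n_voxels, 3) array indices, C order

i, j, k = ijk[20]  # array indices of voxel #20 in the flattened matrix
x, y, z = coord_transform(i, j, k, mask_img.affine)  # its MNI coordinates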

Yet, I don’t think that you really want to generate a set of 238955 MNI coordinates… so, as put by @emdupre, you may want to rethink what you actually need.
Best,
Bertrand

hey @o_aug

Yup, as @emdupre mentioned, the code you’re referencing is from the nltools library. A lot of the design thinking in this library was specifically set up to make it a bit easier, more intuitive, and faster to work with great libraries like nilearn, nibabel, pandas, and numpy together. Under the hood, we’ve composed several existing classes and objects from these libraries into single classes with added functionality, while still exposing all of the functionality of the original library classes.

I’d recommend checking out our docs and the overview video of the library design for more.

In particular, the Brain_Data class is composed of several other objects, like:

  • a pandas.DataFrame (Brain_Data.X or Brain_Data.Y)
  • a numpy array (Brain_Data.data)
  • a NiftiMasker from nilearn (Brain_Data.mask or Brain_Data.to_nifti())

On initialization with some input, Brain_Data automatically tries to map, and internally stores, the mapping between 3d or 4d (x, y, z, time) brain volumes and a 2d numpy array. This is really handy for a lot of reasons. For example, you can arbitrarily cast any 2d numpy array (of the same shape) to voxel space in a single line of code, just by replacing the .data attribute, e.g.

import numpy as np

# Replace the brain data with normally distributed noise at each voxel
brain.data = np.random.normal(size=brain.shape())

# Plot the noise
brain.plot()

nifti_obj = brain.to_nifti()  # this is just a nibabel.Nifti1Image!

nifti_obj.get_fdata()  # use the nibabel method to get the 3d or 4d data representation

If you have a Nifti file of ROI masks, atlases, etc., you can always subset the data in our toolbox using the Brain_Data.apply_mask() or Brain_Data.extract_roi() methods, among others, or even by modifying .data directly, as in my example above.
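
For example (a sketch; the file names are hypothetical):

from nltools.data import Brain_Data

brain = Brain_Data('your_data.nii.gz')
roi_mask = Brain_Data('roi_mask.nii.gz')

roi_voxels = brain.apply_mask(roi_mask)  # Brain_Data restricted to voxels inside the mask
roi_mean = brain.extract_roi(roi_mask)   # mean signal within the mask, per image/TR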

We also wrap nilearn.plotting functions, including view_img, with some interactive widget features for jupyter notebooks (i.e. thresholding, slice selection, etc.), using Brain_Data.iplot()

Hope that clarifies things a bit.


@emdupre @bthirion @ejolly
Thanks for the very detailed answers! (And what a useful package!)

I understand now that the masker transforms from the 4d space to the 2d space of all voxels that fall within the boundaries of the provided mask.
I actually received an existing dataset that is already a 2d numpy array with the shape 150 x 238955 (150 TRs).
However, as I wrote, I only want to fetch voxels that fall within a specific region (striatum).
So IIUC, I first need to create a Brain_Data object out of it.
Then I could use inverse_transform on my Brain_Data to bring it back to 4d and apply a striatum mask, but probably the best approach would be to run Brain_Data.apply_mask() on the 150 x 238955 dataset with a striatum mask?
Should I use the same mask in either case, or is the mask to apply to the Brain_Data object calibrated differently?
Do you know where I can find per-region masks (in MNI space)?

@o_aug

It sounds like the dataset you received is approximately in MNI152 2mm space (it has the right number of voxels). If you don’t have access to the original 4d nifti file of the data and only the 2d array, then you’ll still need some mapper to convert between the data you have and 4d voxel space. If you’re confident the data you received is already in MNI152 2mm space, then you can use the included brain and anatomical masks in our toolbox (see nltools.utils.get_resource_path()).

I would load one of our 2mm masks simply to get this mapping and then replace the data with your data, as in my previous example. Note this only works if you’re comfortable assuming the data you have is definitely already aligned to MNI152 2mm space. Ideally you’d have access to the original 4d nifti file, because Brain_Data will automatically fit the nilearn mapper for you and store it in Brain_Data.nifti_masker.

from nltools.data import Brain_Data
from nltools.utils import get_anatomical # or get_resource_path and pick which one

# Pass in the MNI 2mm template file path to "learn a nifti_masker" that nltools can invert for you
# only need to do this because you don't have access to the original nifti file and therefore have no
# way of knowing how to take an arbitrary 2d numpy array and transform it to 4d space

# Use nltools included MNI 2mm just to learn this mapper
# this will have nifti_masker that's already fit; see more details below
brain = Brain_Data(get_anatomical()) 
brain.data = your_numpy_array

# For convenience so you don't have to repeat the above you can save your data
# to nifti format. This also means you can work with it in other tools/viewers that understand
# nifti file formats
brain.write('your_data.nii.gz')

# Then in the future you can just do
brain = Brain_Data('your_data.nii.gz')

# More details, to illustrate why this is needed
# An empty initialized brain has not made a call to nifti_masker fit
# You can check this in the same way nilearn does
empty_brain = Brain_Data()
empty_brain.nifti_masker.mask_img_  # AttributeError

# Your data
brain.nifti_masker.mask_img_  # No error, because nltools called .fit under-the-hood for you

Now if you have a striatal mask already in MNI152 2mm space, i.e. a nifti file with 0s in non-striatal voxels and 1s in striatal voxels, you can simply pass the file path to Brain_Data and use apply_mask. Otherwise, if you have a 2d array of the striatal mask and no access to the original 3d nifti file, you’ll need to do exactly the same thing as above to first “learn” a mapping.

# EITHER
mask = Brain_Data('path_to_striatal_mask.nii.gz')
# OR
mask = Brain_Data(get_anatomical())
mask.data = striatal_mask_2d_array

# Mask it (apply_mask returns a new Brain_Data rather than modifying in place)
masked_brain = brain.apply_mask(mask)

As far as obtaining a striatal mask you have lots of options. A few quick ones might be:

  • seeing if there’s a striatal parcel in one of nilearn’s included parcellation datasets
  • searching for and using a mask from the NeuroVault public repository; nltools supports directly loading data from NeuroVault given a URL
  • searching for and downloading a nifti file containing a mask from Neurosynth

Hope that helps

Regarding the striatum definition, you may want to look at atlases available in nilearn.datasets, such as Harvard-Oxford, Talairach, or DiFuMo (e.g. the “Ventral striatum RH” component of DiFuMo-1024).
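
For instance, a sketch of pulling a striatal structure out of the Harvard-Oxford subcortical atlas (double-check the label names against ho.labels; this is just one way to build a binary mask):

from nilearn import datasets, image

# The Harvard-Oxford subcortical atlas includes caudate, putamen, and accumbens
ho = datasets.fetch_atlas_harvard_oxford('sub-maxprob-thr25-2mm')
print(ho.labels)  # inspect the available structures

# The integer value in the label image matches the label's position in ho.labels
idx = ho.labels.index('Left Putamen')
putamen_mask = image.math_img(f'img == {idx}', img=ho.maps)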
HTH,
Bertrand

@ejolly @bthirion Amazing thanks!
Re Neurosynth, I just pick the coordinates, download anatomical.nii.gz, and use it as a mask?

@ejolly @emdupre @bthirion So apparently I was looking in the wrong place and the actual data is:
“selected voxels from each subject, keeping voxels in 78 cortical ROIs, defined using the AAL brain atlas [Tzourio-Mazoyer et al., 2002], excluding the cerebellum and white matter” … “average of 29227 voxels per subject”
Is there a way to map these 29227 voxels to a region (specifically striatum, vmPFC), or to revert back to MNI space? I see this atlas is supported by nilearn, but I am not sure how it should be used for my goal.
Thanks!

What do you want to have: a vector of 29227 strings giving you a region label for each voxel? What are you going to do with that?
I really suggest that you spend some time considering the NiftiMasker/NiftiLabelsMasker objects and how they can be used to do the only meaningful tasks you have to do: extract signals per region and then map region-level features back to some brain volume.
See e.g.
https://nilearn.github.io/auto_examples/06_manipulating_images/plot_extract_regions_labels_image.html#sphx-glr-auto-examples-06-manipulating-images-plot-extract-regions-labels-image-py
https://nilearn.github.io/auto_examples/06_manipulating_images/plot_mask_computation.html#sphx-glr-auto-examples-06-manipulating-images-plot-mask-computation-py
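
In a nutshell, that round trip looks like this (a sketch; the atlas and functional file names are hypothetical):

from nilearn.input_data import NiftiLabelsMasker  # nilearn.maskers in newer releases

masker = NiftiLabelsMasker(labels_img='aal_atlas.nii.gz')
signals = masker.fit_transform('func_4d.nii.gz')  # extract: (n_TRs, n_regions)

# Map region-level features back to a brain volume
mean_img = masker.inverse_transform(signals.mean(axis=0, keepdims=True))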
Best,
Bertrand

@bthirion Sorry for being unclear. I have 6 subjects. For each subject I have an array of 120 x ~29227. 120 is the number of TRs during the experiment, so each cell holds a float of voxel activation at that TR. 29227 is the mean number of voxels across subjects, so it differs slightly between them.

I need to be able to do one of (ranked by order of preference):

  1. Have a mapping from a region label (string) to a list of voxels (for example, a dictionary between each of the 78 region labels and a range/list of voxel indices)
  2. Revert back from the AAL space to MNI space (with 238955 values), but I suppose it can’t be done
  3. If both of the above are impossible, just get an array of 120 x V, where V are the voxels retrieved using a specific anatomical mask I have (but that mask is relevant for MNI space)
  4. If all of the above are impossible, have an anatomical mask of striatum & vmPFC that is relevant to the AAL atlas

I read the links you attached and I think I understand all the masking issues, but I am not sure they are relevant to my case, since I start from a space that comes after the AAL atlas has already been applied.
Anyway, I tried to run the code from here and got errors.
When using atlas_aal instead of atlas_yeo_2011:

atlas_aal = datasets.fetch_atlas_aal()
region_labels = connected_label_regions(atlas_aal)

I got (installing openpyxl also didn’t help):

  code/venv/lib/python3.9/site-packages/nilearn/datasets/atlas.py", line 804, in fetch_atlas_aal
    for label in root.getiterator('label'):
AttributeError: 'xml.etree.ElementTree.Element' object has no attribute 'getiterator'

And for the exact code from the link:

yeo_atlas = datasets.fetch_atlas_yeo_2011()
region_labels = connected_label_regions(yeo_atlas)

I get:

region_labels = connected_label_regions(yeo_atlas)
  File "code/venv/lib/python3.9/site-packages/nilearn/regions/region_extractor.py", line 489, in connected_label_regions
labels_img = check_niimg_3d(labels_img)
  File "code/venv/lib/python3.9/site-packages/nilearn/_utils/niimg_conversions.py", line 338, in check_niimg_3d
return check_niimg(niimg, ensure_ndim=3, dtype=dtype)
  File "code/venv/lib/python3.9/site-packages/nilearn/_utils/niimg_conversions.py", line 280, in check_niimg
return concat_niimgs(niimg, ensure_ndim=ensure_ndim, dtype=dtype)
  File "code/venv/lib/python3.9/site-packages/nilearn/_utils/niimg_conversions.py", line 450, in concat_niimgs
first_niimg = check_niimg(next(literator), ensure_ndim=ndim)
  File "code/venv/lib/python3.9/site-packages/nilearn/_utils/niimg_conversions.py", line 271, in check_niimg
raise ValueError("File not found: '%s'" % niimg)
ValueError: File not found: 'description'

Process finished with exit code 1

Any idea?
Thank you so much!

Most likely, atlas fetching did not complete.

There is no such thing as a canonical definition of MNI space with 238955 values. You are probably referring to some reference image, maybe the AAL atlas image? Somehow you need to have an image defined in this space, with 29227 non-zero values, that defines the set of voxels you are considering.
Once you have that, it is rather easy to use a masker.
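
For instance, if you can recover that mask image, a sketch of mapping each voxel column to an AAL label might look like this (the mask file name is hypothetical):

import numpy as np
from nilearn import datasets, image
from nilearn.input_data import NiftiMasker  # nilearn.maskers in newer releases

aal = datasets.fetch_atlas_aal()

# Hypothetical: the binary image defining which 29227 voxels entered the 2d array
masker = NiftiMasker(mask_img='data_mask.nii.gz').fit()

# Put the AAL labels on the same grid, with nearest-neighbour interpolation
# so the integer labels are not blurred
aal_on_grid = image.resample_to_img(aal['maps'], 'data_mask.nii.gz',
                                    interpolation='nearest')
voxel_labels = masker.transform(aal_on_grid)[0].astype(int)  # one AAL code per column

# Region name -> column indices of its voxels
code_to_name = dict(zip(aal['indices'], aal['labels']))
label_to_voxels = {name: np.where(voxel_labels == int(code))[0]
                   for code, name in code_to_name.items()}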

HTH,
Bertrand

@bthirion Thanks for your response.
What do you mean by “atlas fetching did not complete”? The process finished and returned a Bunch object. When I try to access the object directly, it does have a ‘description’ attribute, which makes the error pretty weird. (But anyway, I am more interested in making the AAL fetching work…)

print(yeo_atlas.description)

returns:

b"Yeo 2011 Atlas\n\n\nNotes\n-----\nThis atlas provides a labeling of some cortical voxels in the MNI152\nspace.\n\nFour versions of the atlas are available, according to the cortical\nmodel (thick or thin cortical surface) and to the number of regions\nconsidered (7 or 17).\n\nContent\n-------\n :‘anat’: Background anatomical image for reference and visualization\n :‘thin_7’: Cortical parcelation into 7 regions, thin cortical model\n :‘thin_17’: Cortical parcelation into 17 regions, thin cortical model\n :‘thick_7’: Cortical parcelation into 17 regions, thick cortical model\n :‘thick_17’: Cortical parcelation into 17 regions, thick cortical model\n :‘colors_7’: Text file for the coloring of 7-regions parcellation\n :‘colors_17’: Text file for the coloring of 17-regions parcellation\n\n\nReferences\n----------\nFor more information on this dataset’s structure, see\nhttp://surfer.nmr.mgh.harvard.edu/fswiki/CorticalParcellation_Yeo2011\n\nYeo BT, Krienen FM, Sepulcre J, Sabuncu MR, Lashkari D, Hollinshead M,\nRoffman JL, Smoller JW, Zollei L., Polimeni JR, Fischl B, Liu H,\nBuckner RL. The organization of the human cerebral cortex estimated by\nintrinsic functional connectivity. J Neurophysiol 106(3):1125-65, 2011.\n\nLicence: unknown.\n"

And yes, exactly - I need a mask defined in this space. Do you know how or where I can find one?

@o_aug concerning the errors that you get:

dataset fetchers return a Bunch object as you mentioned, and connected_label_regions expects a Nifti-like image, so you need to do something like:

from nilearn.datasets import fetch_atlas_yeo_2011
from nilearn.regions import connected_label_regions
yeo = fetch_atlas_yeo_2011()
regions_labels = connected_label_regions(yeo['thin_7']) # for example...

Same for aal:

from nilearn.datasets import fetch_atlas_aal
from nilearn.regions import connected_label_regions
aal = fetch_atlas_aal()
regions_labels = connected_label_regions(aal['maps'])

Hope this helps!

Regarding the mask, you need to know how the 29227 voxels have been selected…
Best,
Bertrand

Hi @o_aug

Sorry for coming in late, but I just want to confirm: where are you looking that you see this defined? Is there a particular nltools example you’re referring to? It’s hard to pick up in the middle of an analysis, so we’d really need additional context. As @bthirion notes, we’d need to know how the voxels have been selected, which would mean knowing the original example you were looking at to generate these data vectors.

Elizabeth

@NicolasGensollen It does work for yeo (I get a Nifti1Image object) but I still get the same error for the line

aal = fetch_atlas_aal()

aal = fetch_atlas_aal()
  File "/venv/lib/python3.9/site-packages/nilearn/datasets/atlas.py", line 804, in fetch_atlas_aal
    for label in root.getiterator('label'):
AttributeError: 'xml.etree.ElementTree.Element' object has no attribute 'getiterator'

@emdupre @bthirion Sorry for being unclear. I am looking at a shared dataset. The voxels were selected from each subject, keeping voxels in the 78 cortical ROIs defined in the AAL atlas, excluding the cerebellum and white matter. Does this help?

Thanks, @o_aug! I think what I’m missing is how these voxels were extracted. Do you have the original code that was used? Or: how are the data saved when you started working with them? Are they in nifti format, or something else?