@ejolly @bthirion Amazing thanks!
Re Neurosynth, I just pick the coordinates, download anatomical.nii.gz, and use it as a mask?
@ejolly @emdupre @bthirion So apparently I was looking at the wrong place and the actual data is:
"selected voxels from each subject, keeping voxels in 78 cortical ROIs, defined using the AAL brain atlas [Tzourio-Mazoyer et al., 2002], excluding the cerebellum and
white matter" … "average of 29227 voxels per subject"
Is there a way to map these 29227 voxels to regions (specifically striatum, vmPFC), or to revert back to MNI space? I see this atlas is supported by nilearn: Nilearn: Statistical Analysis for NeuroImaging in Python - Machine learning for NeuroImaging, but I am not sure how it should be used for my goal.
Thanks!
What do you want to have: a vector of 29227 strings giving you a region label for each voxel? What are you going to do with that?
I really suggest that you spend time considering NiftiMasker/NiftiLabelsMasker objects and how they can be used to do the only meaningful tasks you have to do: extract signals per region, and then map region-level features back to some brain volume.
See e.g.
https://nilearn.github.io/auto_examples/06_manipulating_images/plot_extract_regions_labels_image.html#sphx-glr-auto-examples-06-manipulating-images-plot-extract-regions-labels-image-py
https://nilearn.github.io/auto_examples/06_manipulating_images/plot_mask_computation.html#sphx-glr-auto-examples-06-manipulating-images-plot-mask-computation-py
Best,
Bertrand
@bthirion Sorry for being unclear. I have 6 subjects. For each subject I have an array of 120 * ~29227. 120 is the number of TRs during the experiment, so each cell holds a float of voxel activation at that TR. 29227 is the mean number of voxels across subjects, so it differs slightly between them.
I need to be able to do one of (ranked by order of preference):
- Have a mapping from a region-label string to a list of voxels (for example, a dictionary between each of the 78 region labels (strings) and a range/list of voxels).
- Revert back from the AAL space to MNI space (with 238955 values), but I suppose it can't be done.
- If both of the above are impossible, just get an array of 120 * V, where V are the voxels retrieved using a specific anatomical mask I have (but that mask is relevant for MNI space).
- If all of the above are impossible, have an anatomical mask of striatum & vmPFC that is relevant to the AAL atlas.
I read the links you attached and I think I understand all the masking issues, but I'm not sure they are relevant to my case, since I start from a space that is already post-AAL-atlas?
Anyway, I tried to run the code from here and got errors:
When using atlas_aal instead of atlas_yeo_2011:
atlas_aal = datasets.fetch_atlas_aal()
region_labels = connected_label_regions(atlas_aal)
I got (installing openpyxl didn't help either):
code/venv/lib/python3.9/site-packages/nilearn/datasets/atlas.py", line 804, in fetch_atlas_aal
    for label in root.getiterator('label'):
AttributeError: 'xml.etree.ElementTree.Element' object has no attribute 'getiterator'
And for the exact code from the link:
yeo_atlas = datasets.fetch_atlas_yeo_2011()
region_labels = connected_label_regions(yeo_atlas)
I get:
region_labels = connected_label_regions(yeo_atlas)
  File "code/venv/lib/python3.9/site-packages/nilearn/regions/region_extractor.py", line 489, in connected_label_regions
    labels_img = check_niimg_3d(labels_img)
  File "code/venv/lib/python3.9/site-packages/nilearn/_utils/niimg_conversions.py", line 338, in check_niimg_3d
    return check_niimg(niimg, ensure_ndim=3, dtype=dtype)
  File "code/venv/lib/python3.9/site-packages/nilearn/_utils/niimg_conversions.py", line 280, in check_niimg
    return concat_niimgs(niimg, ensure_ndim=ensure_ndim, dtype=dtype)
  File "code/venv/lib/python3.9/site-packages/nilearn/_utils/niimg_conversions.py", line 450, in concat_niimgs
    first_niimg = check_niimg(next(literator), ensure_ndim=ndim)
  File "code/venv/lib/python3.9/site-packages/nilearn/_utils/niimg_conversions.py", line 271, in check_niimg
    raise ValueError("File not found: '%s'" % niimg)
ValueError: File not found: 'description'
Process finished with exit code 1
Any idea?
Thank you so much!
Most likely, atlas fetching did not complete.
There is no such thing as a canonical definition of MNI space with 238955 values. You are probably referring to some reference image, maybe the AAL atlas image? Somehow you need to have an image defined in this space, with 29227 non-zero values, that defines the set of voxels you are considering.
Once you have that, it is rather easy to use a masker.
HTH,
Bertrand
@bthirion Thanks for your response.
What do you mean by atlas fetching did not complete? The process finished and returned a Bunch object. When I try to access the object directly, it does have a 'description' attribute, which makes the error pretty weird. (But anyway, I am more interested in making the AAL fetching work…)
print(yeo_atlas.description)
returns:
b"Yeo 2011 Atlas\n\n\nNotes\n-----\nThis atlas provides a labeling of some cortical voxels in the MNI152\nspace.\n\nFour versions of the atlas are available, according to the cortical\nmodel (thick or thin cortical surface) and to the number of regions\nconsidered (7 or 17).\n\nContent\n-------\n :'anat': Background anatomical image for reference and visualization\n :'thin_7': Cortical parcelation into 7 regions, thin cortical model\n :'thin_17': Cortical parcelation into 17 regions, thin cortical model\n :'thick_7': Cortical parcelation into 17 regions, thick cortical model\n :'thick_17': Cortical parcelation into 17 regions, thick cortical model\n :'colors_7': Text file for the coloring of 7-regions parcellation\n :'colors_17': Text file for the coloring of 17-regions parcellation\n\n\nReferences\n----------\nFor more information on this dataset's structure, see\nhttp://surfer.nmr.mgh.harvard.edu/fswiki/CorticalParcellation_Yeo2011\n\nYeo BT, Krienen FM, Sepulcre J, Sabuncu MR, Lashkari D, Hollinshead M,\nRoffman JL, Smoller JW, Zollei L., Polimeni JR, Fischl B, Liu H,\nBuckner RL. The organization of the human cerebral cortex estimated by\nintrinsic functional connectivity. J Neurophysiol 106(3):1125-65, 2011.\n\nLicence: unknown.\n"
And yes, exactly - I need a mask defined in this space. Do you know how or where I can find one?
@o_aug concerning the errors that you get:
dataset fetchers return a Bunch object as you mentioned, and connected_label_regions expects a Nifti-like image, so you need to do something like:
from nilearn.datasets import fetch_atlas_yeo_2011
from nilearn.regions import connected_label_regions
yeo = fetch_atlas_yeo_2011()
regions_labels = connected_label_regions(yeo['thin_7']) # for example...
Same for aal:
from nilearn.datasets import fetch_atlas_aal
from nilearn.regions import connected_label_regions
aal = fetch_atlas_aal()
regions_labels = connected_label_regions(aal['maps'])
Hope this helps!
Regarding the mask, you need to know how the 29227 voxels have been selected…
Best,
Bertrand
Hi @o_aug,
Sorry for coming in late, but I just want to confirm: where are you looking that you see this defined? Is there a particular nltools example you're referring to? It's hard to pick up in the middle of an analysis, so we'd really need additional context. As bthirion notes, we'd need to know how the voxels were selected, which would mean knowing the original example you were looking at to generate these data vectors.
Elizabeth
@NicolasGensollen It does work for yeo (I get a Nifti1Image object), but I still get the same error for the line
aal = fetch_atlas_aal()
aal = fetch_atlas_aal()
  File "/venv/lib/python3.9/site-packages/nilearn/datasets/atlas.py", line 804, in fetch_atlas_aal
    for label in root.getiterator('label'):
AttributeError: 'xml.etree.ElementTree.Element' object has no attribute 'getiterator'
@emdupre @bthirion Sorry for being unclear. I am looking at a shared dataset. The voxels were selected from each subject, keeping voxels in the 78 cortical ROIs defined in the AAL atlas, excluding cerebellum and white matter. Does this help?
Thanks, @o_aug! I think what I'm missing is how these voxels were extracted. Do you have the original code that was used? Or: how were the data saved when you started working with them? Are they in nifti format, or something else?
@emdupre Oh, so that is probably the cause of the misunderstanding - I thought that these 78 regions were extracted with a standard mask or package. The data are just .npy 2D arrays in which each row represents a TR.
Unfortunately, I donāt have the original code. I will try to see if I can get an access or something similar.
Thank you very much for the help!
(If you have any idea re the error in fetch_atlas_aal, it would still be great.)
Let us know what you find, @o_aug !
I think this might be because you're running Python 3.9. Specifically, looking at the error message:
AttributeError: 'xml.etree.ElementTree.Element' object has no attribute 'getiterator'
Element.getiterator() was deprecated long ago and finally removed in Python 3.9. I think we'd need to open an issue to change these lines (and maybe add Python 3.9 testing to our build matrix!?).
WDYT @NicolasGensollen ?
Good catch @emdupre! I can indeed replicate with Python 3.9.
I'll open an issue and work on a fix tomorrow.
You're also right about adding Python 3.9 to the tests, I should have added it already!
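For anyone hitting this before a fix lands: the stdlib's drop-in replacement for Element.getiterator() is Element.iter(), available since Python 2.7/3.2. A minimal illustration (the XML snippet below is made up, loosely mimicking an atlas label file):

```python
import xml.etree.ElementTree as ET

xml_snippet = "<atlas><label>Precentral_L</label><label>Precentral_R</label></atlas>"
root = ET.fromstring(xml_snippet)

# root.getiterator('label')  # AttributeError on Python 3.9+
labels = [el.text for el in root.iter('label')]
print(labels)  # ['Precentral_L', 'Precentral_R']
```

So the one-line fix in atlas.py is to swap root.getiterator('label') for root.iter('label').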
@emdupre @NicolasGensollen I was able to get a mapping of each voxel to (x, y, z) coords, and a 4x4 matrix of doubles that, to my understanding, should map these to MNI space.
I found this function that looks relevant:
https://nilearn.github.io/modules/generated/nilearn.image.coord_transform.html
IIUC, I should convert each voxel to (x, y, z), and then use this matrix and function to get coords in MNI space.
But, can you please explain how the matrix works? I would hate to use something I don't understand…
Re the AAL atlas error: by downgrading from Python 3.9 I was able to fetch the AAL atlas, but I have a question about using it: let's say I want to use a specific region label (e.g. Frontal_Mid_L) as a map and plot it on a brain image, how can it be done?
I looked over the examples and docs of nilearn and didn't find a way to plot specific labels.
I suppose I should use plot_img or plot_roi, but I couldn't find how to fetch the image of the specific region from the atlas (after taking the nifti file using ['maps']).
Thank you very much!
But, can you please explain how the matrix works? I would hate to use something I don't understand…
If I understand correctly, you're referring to an affine matrix? In that case, I'd strongly recommend this excellent explainer from the Nibabel docs: Neuroimaging in Python - NiBabel 3.2.0 documentation
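The short version: the 4x4 affine maps voxel indices (i, j, k) to world/MNI millimetre coordinates via homogeneous coordinates; the upper-left 3x3 block encodes voxel size and orientation, and the last column the origin. A toy example (the affine values below are illustrative, not your file's):

```python
import numpy as np

# Illustrative affine: 3 mm isotropic voxels, origin chosen so that
# voxel (0, 0, 0) maps to (-90, -126, -72) mm
affine = np.array([[3., 0., 0.,  -90.],
                   [0., 3., 0., -126.],
                   [0., 0., 3.,  -72.],
                   [0., 0., 0.,    1.]])

i, j, k = 30, 42, 24                        # voxel indices
x, y, z, _ = affine @ np.array([i, j, k, 1.0])
print(x, y, z)  # 0.0 0.0 0.0 -- mm coordinates in world (MNI) space
```

nilearn.image.coord_transform performs exactly this multiplication for you, given the coordinates and the affine.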
Let's say I want to use a specific region label (e.g. Frontal_Mid_L) as a map and plot it on a brain image, how can it be done?
You should be able to access the label, which will tell you the associated value in the niimg (each label is paired with its associated map, in increasing order). As you suggested, you could then use plot_roi to visualize it and confirm its location!
On the second point, here's some example code:
import numpy as np
import nibabel as nib
from nilearn import datasets, plotting
from nilearn.image import math_img
aal = datasets.fetch_atlas_aal()
assert aal.labels[6] == 'Frontal_Mid_L'
# fetch valid ROI values:
data = nib.load(aal.maps).get_fdata()
print(np.unique(data))  # list all valid ROI values
# grab sixth ROI, discarding background
roi = math_img("img == 2112", img=aal.maps)
plotting.plot_roi(roi)
@emdupre Thanks, it works and I do get the ROI. Can you please explain why you chose img == 2112 for the sixth ROI? Let's say I now want the seventh and the 31st ROI, what should be passed instead of 2112?
If you run the code snippet you can see the full list of valid values in data; 2112 simply corresponds to the 6th item in the list after discarding the initial 0 entry (the value for the background). So you'd just subset the np.unique output appropriately:
import numpy as np
import nibabel as nib
from nilearn import datasets
aal = datasets.fetch_atlas_aal()
data = nib.load(aal.maps).get_fdata()
values = np.unique(data)
values = values[1:] # drop 0; ie. discard background
for label, roi in zip(aal.labels, values):
    print(f'{label} has a value of {roi}')
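As an aside (assuming a reasonably recent nilearn), fetch_atlas_aal also returns an indices list positionally paired with labels, which avoids counting through the np.unique output at all. A sketch with toy stand-in lists in place of the fetched Bunch attributes:

```python
# Toy stand-ins for aal.labels / aal.indices (in the real Bunch the
# image values are stored as strings, paired positionally with labels)
labels = ['Precentral_L', 'Precentral_R', 'Frontal_Sup_L']
indices = ['2001', '2002', '2101']

label_to_value = dict(zip(labels, indices))
print(label_to_value['Precentral_R'])  # 2002
```

The looked-up value can then go straight into the math_img call, e.g. math_img(f"img == {label_to_value['Precentral_R']}", img=aal.maps).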
Great, it's working, 10x!