Hi, I am wondering which exact coordinate system the surfaces fetched by nilearn.datasets.fetch_surf_fsaverage use. Is it the ICBM 2009c Nonlinear Asymmetric template?
Hi @David1, welcome to Neurostars!
The coordinates of the vertices change depending on what kind of surface you look at: pial, inflated, etc. Did I understand your question correctly?
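For instance, here is a quick sketch (my own illustration, not from the nilearn docs; attribute names assume a recent nilearn release) showing that the same vertex index sits at different x-y-z coordinates on the pial and inflated fsaverage5 meshes:

```python
from nilearn import datasets, surface

# Download the fsaverage5 meshes (pial, inflated, sulcal depth, ...)
fsaverage = datasets.fetch_surf_fsaverage(mesh="fsaverage5")

# load_surf_mesh gives the vertex coordinates and the triangle faces
pial = surface.load_surf_mesh(fsaverage["pial_left"])
infl = surface.load_surf_mesh(fsaverage["infl_left"])

print(pial.coordinates.shape)                    # (10242, 3) for fsaverage5
print(pial.coordinates[0], infl.coordinates[0])  # same vertex, different locations
```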
I found these resources helpful:
You might have noticed that I never mentioned where CIFTI files store the location of each surface voxel. That’s because they don’t! This is because the surface can be “inflated” to varying degrees, from very “wiggly” (showing full anatomical detail) to completely flattened (showing no anatomical detail). Depending on the degree of inflation, the 3-dimensional location of each surface voxel changes! For a given degree of inflation, the x-y-z location of each voxel are stored as a different type of file, called a GIFTI. I’ll mention below how to read and navigate these files in MATLAB too.
[…]
The left- and right-hemispheres are stored in separate GIFTI files. The GIFTI toolbox can be used to read GIFTI files into MATLAB. After installing and adding the toolbox, mygifti = gifti('fname.surf.gii') results in a MATLAB structure with several fields, including vertices, a VL x 3 matrix containing the x-y-z coordinates of each voxel, where VL is around 30,000, the number of surface voxels in one hemisphere of the brain. To map the voxels in a GIFTI file to the corresponding voxels in a CIFTI file, reference the brainstructure field: all voxels with brainstructure==1 (CORTEX_LEFT) map to those in a GIFTI file for the left hemisphere; all voxels with brainstructure==2 (CORTEX_RIGHT) map to those in a GIFTI file for the right hemisphere.
- A good tutorial for manipulating vertex 3D coordinates:
Loading and plotting of a cortical surface atlas - Nilearn
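Since the quoted text only shows the MATLAB GIFTI toolbox, here is a rough Python equivalent using nibabel (my own sketch; 'fname.surf.gii' is the same placeholder filename as in the quote):

```python
import nibabel as nib

# Load one hemisphere's surface GIFTI (placeholder filename from the quote above)
gii = nib.load("fname.surf.gii")

# agg_data pulls out the data arrays by intent: the vertex x-y-z coordinates
# ("pointset") and the triangles connecting them ("triangle")
coords, triangles = gii.agg_data(("pointset", "triangle"))
print(coords.shape)      # (n_vertices, 3)
print(triangles.shape)   # (n_triangles, 3)
```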
Thank you very much for the information!
I am still wondering what I should do for this specific case:
I have fMRI volumes in MNI152NLin2009cAsym space and want to use nilearn.surface.vol_to_surf with the surface "fsaverage5" to extract the cortical activations. From my understanding, the volume and the surface have to be in the same space so that the surface can be matched to the cortical voxels.
I am currently using fsaverage5['pial_left'] (and fsaverage5['pial_right']) for this purpose. Is this correct? I would also appreciate any general suggestions on this topic.
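For context, here is a minimal sketch of what I am currently doing (the volume filename is just a placeholder for my data):

```python
from nilearn import datasets, surface

fsaverage5 = datasets.fetch_surf_fsaverage(mesh="fsaverage5")

# Project the MNI152NLin2009cAsym-space volume onto each hemisphere's pial mesh;
# the result is (n_vertices,) per 3D volume, or (n_vertices, n_timepoints) for 4D data
texture_left = surface.vol_to_surf("my_bold_in_mni152nlin2009casym.nii.gz",
                                   fsaverage5["pial_left"])
texture_right = surface.vol_to_surf("my_bold_in_mni152nlin2009casym.nii.gz",
                                    fsaverage5["pial_right"])
```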
Thanks again.
I am not a user of nilearn.surface.vol_to_surf. What you could do is use software such as fmriprep, which can output your data in both volume and surface spaces.
Thanks, I will give it a try. If anyone else has used nilearn.surface.vol_to_surf, I would appreciate any suggestions.
Here are two additional informative links for this question: