Map Talairach NIfTI to world coordinates (RAS+ space)

In continuation of my previous question: I can get the Talairach atlas using the nilearn function fetch_atlas_talairach, which has orientation info in the header as:

qform_code : unknown
sform_code : aligned

Since the sform_code is set (aligned corresponds to code 2), can I just apply the sform affine to convert each voxel coordinate to a world coordinate?
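Applying the affine to homogeneous voxel coordinates is exactly how voxel-to-world mapping works; here is a minimal sketch with a made-up affine (nibabel's `nibabel.affines.apply_affine` does the same bookkeeping for you on a real image's affine):

```python
import numpy as np

# Made-up sform affine for illustration: 2 mm isotropic voxels,
# origin offset so the volume is centred near world (0, 0, 0).
affine = np.array([[2.0, 0.0, 0.0,  -90.0],
                   [0.0, 2.0, 0.0, -126.0],
                   [0.0, 0.0, 2.0,  -72.0],
                   [0.0, 0.0, 0.0,    1.0]])

i, j, k = 45, 63, 36                       # a voxel index
world = affine @ np.array([i, j, k, 1.0])  # homogeneous coordinate is 1
print(world[:3])                           # x, y, z in mm, RAS+  → [0. 0. 0.]
```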

What I am trying to explore is to get the Brodmann areas for my NIfTI from the Talairach atlas. Since both images will be in RAS+ space, can I just get the labels for my input NIfTI coordinates by looking up the corresponding label in the Talairach NIfTI at the same coordinate? The confusing part is that my input image has shape (64, 64, 32) whereas the Talairach image has shape (141, 172, 110), so since the two volumes are not the same size, I think there might be an issue with the above approach. Please suggest whether I am thinking in the right direction. If the size difference is an issue, can someone suggest how to make them the same size?

As a rule, if you haven’t aligned two images, you can’t count on “world” coordinates referring to the same location relative to the brain. I don’t know what space the “Talairach Atlas” in question has been registered to, or whether you’ve registered your image to that space. If you can’t confidently answer the first question and be sure that your image has been registered to that same space, then you need to work on that before you can start to map locations in one image to the other.

That said, once you have your image registered into the same space as your Talairach atlas, the two affines will each describe a mapping from their voxels onto a common RAS space, and you can map from voxels in B to voxels in A by taking inv(A.affine).dot(B.affine).dot([[i], [j], [k], [1]]). (Note the homogeneous coordinate is 1, not 0, and the results are unlikely to be integers, so you’ll want to consider your rounding function.)
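A sketch of that voxel-to-voxel mapping (the two affines below are invented for illustration; in practice they come from the `.affine` attributes of the loaded images):

```python
import numpy as np

# Hypothetical affines: A is a 1 mm isotropic image, B is 2 mm isotropic
# with a shifted origin. In practice use nib.load(...).affine.
A_affine = np.eye(4)
B_affine = np.diag([2.0, 2.0, 2.0, 1.0])
B_affine[:3, 3] = [-10.0, -10.0, -10.0]

def voxel_b_to_voxel_a(ijk, a_aff, b_aff):
    """Map a voxel index in B to the nearest voxel index in A."""
    homogeneous = np.append(ijk, 1.0)              # [i, j, k, 1]
    a_vox = np.linalg.inv(a_aff) @ b_aff @ homogeneous
    return np.rint(a_vox[:3]).astype(int)          # round to nearest voxel

print(voxel_b_to_voxel_a([5, 5, 5], A_affine, B_affine))  # → [0 0 0]
```

Here `np.rint` is one rounding choice; since atlas labels are categorical, nearest-voxel rounding is the usual pick.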

Another thing you can do is to use something like mri_vol2vol to map your Talairach atlas into your input image space, using --interp nearest to handle the decision of which voxels match to which. Then you just open both files, and the same voxel location in each refers to the same RAS location. (This does, of course, assume that your images have already been registered, as in the previous paragraph.)

@effigies, I am a newcomer to the field, so apologies in case my question seems too basic.

How do we figure out which space a NIfTI has been registered to? As far as I understood from Orientation Information, NIfTI has standardized the orientation info.

Can you provide some reading on this topic? What you have written in the second and third paragraphs above seems confusing to me.

It might be better to say that NIFTI standardizes the storage and interpretation of orientation information, but can’t eliminate the problem that there isn’t one universal space for all data to be in. Different templates are based on different reference brains, and may make different choices for where their origin is. Having an image in NIFTI means you generally know which direction left/right, posterior/anterior, inferior/superior are, which makes it possible to register it to another brain where you also know these things.

As a rule, if there isn’t documentation that says what something’s been registered to, and you didn’t register it yourself, then you just have to assume they’re out of register and do your own registration.

I would approach it from this direction: Find an atlas you want to use, and read its documentation to find out what template it’s aligned to. Then find a copy of that template, and register your image to that.

@effigies, I am getting the Talairach.nii from here. It seems to be a functional image, and when I searched for its anatomical image, it appears there isn’t one. I also came across the term spatial normalization; is it possible to do spatial normalization between two functional images?

If not, can you suggest another template I can use, and how I would then figure out the Brodmann areas?