How do I align images in different MNI spaces (images with different sizes)?

Hello,
I have recently been confused about MNI spaces.
I used fMRIPrep to preprocess my fMRI data; the output images are registered to MNI152NLin2009cAsym. The 2 mm image is 97 × 115 × 97, while the size of my previous images is usually 91 × 109 × 91.
I would like to know whether my previous images are in MNI152Lin space. If so, how can I convert my previous data (images and atlas) to MNI152NLin2009cAsym?

It seems the formatting came out wrong above. The 2 mm image is 97 × 115 × 97, while the size of my previous images is usually 91 × 109 × 91.

This isn’t a problem; it just means the image had to be scaled in the y-dimension to conform to the standard space. To confirm, you can plot two different subjects’ MNI images against each other. Assuming the scanner acquisitions are the same, you should see that they share the same dimensions and overlap.

Steven

Thank you very much!
I wonder if I can use `resample_to_img` in nilearn to resample my previous data to the new size?

Sure, that should work. Why do you want to resample your native-space raw data to MNI, though?

Because there are some atlases in MNI space that I want to use with my data. Or is there another way?

Depends on what software you end up using, but I think in most cases, as long as your BOLD and atlas spaces are spatially aligned, you don’t need to do any resampling.

I'm still confused about this. For example, my BOLD image is 3.125 mm, but atlases are usually 1 mm or 2 mm.
I usually use Python to extract time series according to an atlas. Because the image and the atlas have different sizes, their matrices are not consistent, so I cannot use numpy/nilearn to compute with them. So I tried to resample my BOLD image to 2 mm in order to use the atlas. Is there a better way? Or have I described it incorrectly?

I believe Nilearn maskers account for differences in spatial resolution.

Oh, I see what you mean! Thank you very much! :grinning: :grinning: :grinning:
