How to transform mask from MNI to native space using fmriprep outputs?

I have a mask of visual cortex created in MNI space. I pre-processed my fMRI data with fmriprep. Now I want to get my mask from MNI to native (functional BOLD) space. How can I make use of the fmriprep outputs to achieve this? Previously, I combined the inverse of the co-registration from anatomical to MNI space and the inverted transformation matrices of the co-registration from functional to anatomical space using FSL’s applywarp. Thanks in advance for your help and please let me know if you need additional information.

PS: Relatedly, can anyone point me to “good” masks of visual cortex in MNI space? I would be interested to compare my mask to masks used in other studies. Thanks!

The easiest way is to use the FMRIPREP outputs that are already in MNI space. All you will need to do is resample the mask to the same shape and voxel size.

If you really insist on moving the mask to the participant T1w space (FMRIPREP outputs BOLD data already aligned with the participant T1w), you can use ANTs to apply the _warp.h5 file from the anat folder.

Thanks @ChrisGorgolewski for your response!

After pre-processing with fmriprep we will proceed with a multivariate decoding approach, so we would like to use well but minimally pre-processed data that did not undergo warping into a different space (which could obscure relevant pattern information). I hope that makes sense. Am I correct in assuming that the data corresponding to --output-space fsnative would be the “correct” data for us to proceed with? If yes, how would a transformation of a mask from MNI to that space look? Thanks! (And sorry if something is unclear, as I am relatively new to this topic.)

fsnative is surface data; if you are looking for a native volume, look for T1w.

Alright, I got it. Thanks, @ChrisGorgolewski!

Hi @ChrisGorgolewski,
thanks again for your help!

The only *.h5 files I could find in the anat folder of my FMRIPREP output are sub-01_from-T1w_to-MNI152NLin2009cAsym_mode-image_xfm.h5 and sub-01_from-MNI152NLin2009cAsym_to-T1w_mode-image_xfm.h5. Are these different files from the ones you were suggesting? Thanks!

Ah yes, that’s the new naming convention. These are the files you are looking for.

Hi @ChrisGorgolewski,

Okay, thanks just wanted to make sure.

I followed your suggestion regarding how to move the mask from MNI to T1w space:

Here’s how I did it in a Nipype script:

ApplyTransforms Node

import nipype.pipeline.engine as pe  # needed for pe.Node below
from nipype.interfaces.ants import ApplyTransforms

transform = pe.Node(interface=ApplyTransforms(), name='transform')
transform.inputs.dimension = 3
transform.inputs.float = False
transform.inputs.interpolation = 'NearestNeighbor'

Part of workflow

wf.connect(selectfiles, 'vismask', transform, 'input_image')
wf.connect(selectfiles, 'transforms', transform, 'transforms')
wf.connect(selectfiles, 'anat', transform, 'reference_image')
wf.connect(transform, 'output_image', datasink, 'transform')

where vismask is the mask over visual cortex in MNI space, transforms is the *.h5-file and the reference_image is the subject’s T1w image. This is what the executed command looks like:

antsApplyTransforms --default-value 0 --dimensionality 3 --float 0 \
  --input /path/to/mask/vismask.nii --interpolation Linear --output vismask_trans.nii \
  --reference-image /project/derivatives/fmriprep/sub-01/anat/sub-01_desc-preproc_T1w.nii.gz \
  --transform /project/derivatives/fmriprep/sub-01/anat/sub-01_from-MNI152NLin2009cAsym_to-T1w_mode-image_xfm.h5

So it works: I get a mask that is reasonably transformed into subject space. (Note: the mask in MNI space still needs to be improved, and obviously it does not only cover visual cortex…)


Anyway, now I have yet another problem: when I extract the functional data within this mask in Nilearn, it takes very long (which is okay, since it’s a lot of data) but eventually crashes. I use the NiftiMasker like this:

from nilearn.input_data import NiftiMasker

masker = NiftiMasker(mask_img=path_mask)
masked_data = masker.fit_transform(path_func)

I don’t get this problem when I use the FMRIPREP output *space-T1w_desc-brain_mask.nii.gz as a “T1w whole brain mask”.

Any ideas? Thanks!

Glad it works. The new problem seems like material for a new post tagged with ‘nilearn’ to highlight it to their team. I would also include information about the voxel sizes of the two masks.

Alright! Thanks @ChrisGorgolewski for your efforts!

Hey @lennart ,

so I have been trying the same process as you (moving a brain mask into the native space of my participant), but nipype’s ants.ApplyTransforms doesn’t produce any output file, and I am not sure why:

at = ApplyTransforms()

at.inputs.input_image = analysis_data_dir + 'Brain_masks/juelich_prob_GM_Hippocampus_entorhinal_cortex_R.nii.gz'
at.inputs.reference_image = imag_data_dir + 'Derivatives/sub-01/anat/sub-01_desc-preproc_T1w.nii.gz'
at.inputs.output_image = analysis_data_dir + 'Brain_masks/GM_Hippocampus_entorhinal_cortex_R_registered.nii.gz'
at.inputs.transforms = imag_data_dir + 'Derivatives/sub-01/anat/sub-01_from-MNI152NLin6Asym_to-T1w_mode-image_xfm.h5'
at.inputs.interpolation = 'Linear'
at.inputs.default_value = 0
at.inputs.dimension = 3
#at.inputs.transforms = [imag_data_dir + 'Derivatives/sub-01/anat/sub-01_from-MNI152NLin2009cAsym_to-T1w_mode-image_xfm.h5']
#at.inputs.invert_transform_flags = [False, True]


'antsApplyTransforms --default-value 0 --dimensionality 3 --float 0 --input /Users/badwal/Desktop/Study_Analysis/Brain_masks/juelich_prob_GM_Hippocampus_entorhinal_cortex_R.nii.gz --interpolation Linear --output /Users/badwal/Desktop/Study_Analysis/Brain_masks/GM_Hippocampus_entorhinal_cortex_R_registered.nii.gz --reference-image /Users/badwal/Desktop/Study_ImagData/Derivatives/sub-01/anat/sub-01_desc-preproc_T1w.nii.gz --transform /Users/badwal/Desktop/Study_ImagData/Derivatives/sub-01/anat/sub-01_from-MNI152NLin6Asym_to-T1w_mode-image_xfm.h5'

I do get the output printed, but there is no output file created.

Would much appreciate a hint :)

@ChrisGorgolewski maybe you can see the obvious problem here.