Transforming Study Forrest (fMRI) data to MNI space

I have been working with the retinotopic-mapping data from the Study Forrest project (thanks for providing this data, @eknahm!). Specifically, I’m using the data from the studyforrest-data-aligned repository. I now want to transform an image that I’ve created based on this data to MNI space. But I have a few (newbie) questions:

  1. I understand that the ‘aligned’ data is in a participant-specific space, which is different for every participant. Is that correct?
  2. To plot the data such that it maps correctly onto the brain (for example with nilearn.plotting.plot_glass_brain()), I have to transform it to MNI space. Is that correct?
  3. The transformations to do so are already available (i.e. as part of the Study Forrest dataset). But where are they? And how do I use them?

Any help would be much appreciated!

I understand that the ‘aligned’ data is in a participant-specific space, which is different for every participant. Is that correct?

Yes, that is correct. In that dataset, all data are aligned to a subject-specific template image that was computed from all scans of a similar kind that were available at the time (e.g., all 3T BOLD scans with similar coverage and orientation).

If you are interested in the details of how exactly the alignment was computed, this file is a good starting point: https://github.com/psychoinformatics-de/studyforrest-data-aligned/blob/master/code/avmovie_motion_correction.submit

and here is the core alignment/reslicing helper: https://github.com/psychoinformatics-de/studyforrest-data-aligned/blob/master/code/mcflirt_2stage

To plot the data such that it maps correctly onto the brain (for example with nilearn.plotting.plot_glass_brain()), I have to transform it to MNI space. Is that correct?

I believe that is correct, too. I had thought it was somehow possible to pass a transformation to the function directly to avoid reslicing, but as far as I can tell you need to bring the data into MNI space in any case. A minimal plotting call is sketched below.
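
Here is a minimal sketch of such a plotting call, assuming you already have an image in MNI space ('my_image_mni.nii.gz' is a hypothetical filename standing in for whatever image you produced):

    # Plot a volume that is already in MNI space on a glass brain.
    # 'my_image_mni.nii.gz' is a placeholder for your transformed image.
    from nilearn import plotting

    display = plotting.plot_glass_brain('my_image_mni.nii.gz')
    display.savefig('glass_brain.png')
    display.close()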

The transformations to do so are already available (i.e. as part of the Study Forrest dataset). But where are they?

The template images and transformation can be found at https://github.com/psychoinformatics-de/studyforrest-data-templatetransforms/

For example the study-specific group-template image for the 3T scans is this one:

templates/grpbold3Tp2/brain.nii.gz

and one of the subject-specific template images (the target images for the “aligned” data) is this one:

sub-01/bold3Tp2/brain.nii.gz

The transformations between the image spaces (and their inverses) can also be found in this directory structure.

Here is an example:

sub-01/bold3Tp2/in_grpbold3Tp2/subj2tmpl_warp.nii.gz 

This is the warpfield for transforming data from “sub-01” (first subdirectory: source space), acquired in its ‘bold3Tp2’ raw data space, into the ‘grpbold3Tp2’ template space (second subdirectory: target space). The directory containing this file also holds the inverse transformation and an example transformed file that documents the dimensions and resolution of the respective reference space.
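
If you have the templatetransforms dataset checked out locally, a quick way to list the available warpfields is sketched below (an assumption on my part: it presumes the dataset root is your working directory and that the file naming follows the example above):

    # List all subject-to-template warpfields in a local checkout of the
    # studyforrest-data-templatetransforms dataset.
    from pathlib import Path

    for warp in sorted(Path('.').glob('sub-*/bold3Tp2/in_grpbold3Tp2/*_warp.nii.gz')):
        print(warp)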

And how do I use them?

Here it becomes a little more tricky and software-specific. All transformations were computed with tools from FSL and are stored in its data formats: simple affine transformations in the text-based .mat file format, and non-linear transformations, such as the one above, in FNIRT’s warpfield file format (which contains both an initial affine transformation in the NIfTI header and the actual warpfield). Depending on which tools you have available, and which specific transformation you want to perform, things will be simple, or less so.
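
To illustrate, here is a sketch of inspecting both formats, assuming local copies of the files (the .mat filename is hypothetical; the actual name depends on the transform): the .mat files are plain-text 4x4 matrices that NumPy can read directly, and the warpfield is a regular NIfTI image whose header carries the initial affine:

    import numpy as np
    import nibabel as nib

    # FSL's text-based .mat format is a whitespace-separated 4x4 matrix.
    # 'subj2tmpl.mat' is a hypothetical filename for an affine transform.
    affine = np.loadtxt('sub-01/bold3Tp2/in_grpbold3Tp2/subj2tmpl.mat')
    print(affine.shape)  # (4, 4)

    # The FNIRT warpfield is an ordinary NIfTI image; its header affine is
    # the initial affine component mentioned above.
    warp = nib.load('sub-01/bold3Tp2/in_grpbold3Tp2/subj2tmpl_warp.nii.gz')
    print(warp.affine)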

For your specific goal of getting data from the “aligned” subject image space into an MNI-like image space, you have two choices:

  1. Use FSL’s applywarp to compute a resliced image with the non-linear transformation applied. You should be able to open the result with the nilearn plotting function. Here is the call:

    applywarp \
      -i <file in subject aligned space> \
      -r templates/grpbold3Tp2/brain.nii.gz \
      -w sub-01/bold3Tp2/in_grpbold3Tp2/subj2tmpl_warp.nii.gz \
      -o <output filename>
    

    This works because the given reference image (-r) is itself in MNI space.

  2. Ignore the non-linear part of the alignment, extract the affine transformation from the NIfTI header of the warpfield image (via NiBabel: img.affine; older NiBabel versions used the now-removed img.get_affine()), and use it as the affine of the image you plot (see the sketch after this list). This would be (subjectively) less accurate, but generally good enough for plotting on a glass brain.
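
Here is a sketch of that second option, under the assumption stated above that the warpfield’s header affine is a usable subject-to-MNI approximation; all filenames are placeholders:

    import nibabel as nib
    from nilearn import plotting

    # 'my_image_in_subject_space.nii.gz' is a placeholder for your image.
    subj_img = nib.load('my_image_in_subject_space.nii.gz')
    warp_img = nib.load('sub-01/bold3Tp2/in_grpbold3Tp2/subj2tmpl_warp.nii.gz')

    # Re-wrap the subject-space data with the affine from the warpfield
    # header; the voxel data are untouched, only their world-space
    # interpretation changes.
    approx_mni = nib.Nifti1Image(subj_img.get_fdata(), warp_img.affine)

    plotting.plot_glass_brain(approx_mni)
    plotting.show()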

I hope this sheds some light on the topic. And thanks for asking!


Thank you! I installed FSL through NeuroDebian and used the applywarp solution. I’m still a bit hazy on what it does exactly, but it’s very straightforward to use and works like a charm.

I can now confirm that the right hemisphere indeed responds to visual stimulation in the left visual field. :)

Awesome! Thanks for the confirmation.