Multi-echo data motion correction

Hi! I am an MSc student and I am new to multi-echo fMRI and neuroimaging in general. I have a dataset with 3 echoes per participant. I would like to motion-correct all three echoes to the same reference image before combining the echoes with tedana. So far I have been unable to do so, as FSL performs motion correction on each echo separately, relative to that echo's own first image. Any advice on how to proceed? I am unable to use fMRIPrep at my institution, but I do have Nipype available. I have also been unable to use afni_proc so far.

Thanks in advance!!

As of now, I think AFNI and fMRIPrep are the only software pipelines with default settings for multi-echo motion correction. In both cases, the motion correction translation and rotation parameters are estimated on a single echo for each volume, and then those identical parameters are applied to the other echoes of the same volume. I'm fairly sure you can do this with command-line functions in FSL, but I'm not a regular FSL user, so I don't know precisely which commands to use.
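To make that idea concrete, here is a tiny NumPy-only toy (not real image registration; "motion" is simulated as exact integer translations so it can be undone perfectly): the per-volume parameters are estimated once, from echo 1, and the identical transforms are reused for every other echo. All array shapes and values here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
base = np.zeros((8, 8))
base[3:5, 3:5] = 1.0                        # one toy "brain" image
shifts = rng.integers(-2, 3, size=(4, 2))   # per-volume "motion" (4 volumes)

def simulate_echo(scale):
    # The head moves identically in every echo; only the contrast (scale) differs
    return np.stack([np.roll(base * scale, tuple(s), axis=(0, 1)) for s in shifts])

def correct(echo_4d, shifts):
    # Apply the transforms estimated on echo 1 to any echo, volume by volume
    return np.stack([np.roll(vol, tuple(-s), axis=(0, 1))
                     for vol, s in zip(echo_4d, shifts)])

echo1, echo2 = simulate_echo(1.0), simulate_echo(0.5)
echo1_mcf = correct(echo1, shifts)          # parameters estimated here...
echo2_mcf = correct(echo2, shifts)          # ...reused unchanged here
assert np.allclose(echo1_mcf, base)         # every volume realigned to base
assert np.allclose(echo2_mcf, 0.5 * base)   # echo 2 aligned with the same params
```

Real tools estimate a rigid transform per volume rather than an integer shift, but the reuse pattern is the same.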
Hope this helps.


Thank you so much for your response! Do you happen to know where I can find any AFNI / afni_proc scripts for multi-echo preprocessing? I can't seem to figure it out myself ):

Look at the Example 12 series for multi-echo data processing: https://afni.nimh.nih.gov/pub/dist/doc/program_help/afni_proc.py.html
Example 12a just brings the data through motion correction and registration to a template. The key change is to use -dsets_me_run instead of -dsets (with a * for the echo number in the file name). You can also use -reg_echo to choose which echo the registration is calculated on.
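As a rough sketch, a minimal afni_proc.py call using those options might look like this (subject ID, file names, and echo times below are placeholders for your own data; see Example 12a in the help page for the full set of options):

```shell
# Hypothetical file layout: sub01_rest_echo1.nii.gz, sub01_rest_echo2.nii.gz, ...
afni_proc.py \
    -subj_id sub01 \
    -dsets_me_run sub01_rest_echo*.nii.gz \
    -echo_times 13.7 30.0 47.0 \
    -reg_echo 2 \
    -blocks tshift volreg
```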

Cool! Thanks so much!


I was working on this recently for an old project looking at multi-echo data.
If you have access to Python and you are able to install the ANTsPy package, then this code will do the motion correction for you:

import ants
from typing import List


def multiecho_motion_correction(
    echoes: List[ants.ANTsImage], mask: ants.ANTsImage
) -> Tuple[List[ants.ANTsImage], List[List[str]]]:
    """Given a sorted list of ants images, each a 4D image with a different
echo-time, this function will compute parameter estimates from the first echo
and apply to all echoes, returning both a list of motion corrected ants images,
and a list of the transformation files corresponding to the motion correction.
The motion correction is performed using a rigid transformation.
    """
    echo_0: ants.ANTsImage = echoes[0]
    mcf_0: dict = ants.motion_correction(echo_0, mask=mask, type_of_transform="BOLDRigid")
    mcf_others = [apply_to_other_echo(x, mcf_0) for x in echoes[1:]]
    return [mcf_0["motion_corrected"], *mcf_others], mcf_0["motion_parameters"]


def apply_to_other_echo(other_echo: ants.ANTsImage, mcf: dict) -> ants.ANTsImage:
    """Given a ants image (other_echo) and an ANTs motion corrected output
(dict with 'motion_corrected' and 'motion_parameters' keys), return the other_echo
motion corrected with the transforms from the first echo.
    """
    fixed_list = ants.ndimage_to_list(ants.image_clone(mcf["motion_corrected"]))
    moving_list = ants.ndimage_to_list(other_echo)
    transforms = mcf["motion_parameters"]

    out: List[ants.ANTsImage] = [
        ants.apply_transforms(fixed, moving, transform)
        for fixed, moving, transform in zip(fixed_list, moving_list, transforms)
    ]
    image_target: ants.ANTsImage = ants.image_clone(mcf["motion_corrected"])
    return ants.list_to_ndimage(image_target, out)

And to use it you could call:

# echoes is a list of filenames, e.g.: ["echo1.nii", "echo2.nii"]
loaded_data = [ants.image_read(x) for x in echoes]
# Mask tells the motion correction to disregard noisy voxels outside the brain
mask = ants.get_mask(ants.get_average_of_timeseries(loaded_data[0]))
list_mcf, motion_pars = multiecho_motion_correction(loaded_data, mask)

list_mcf will contain ANTs image objects, which you can save to files individually with, for example:

img = list_mcf[0]
img.to_filename("echo1_mcf.nii.gz")

No way! That’s amazing, thank you so much!!! :snail: :brain:

Hi again, sorry to be annoying! But when I try to run the script it tells me name 'Tuple' is not defined, even though it seems to me like it is being defined?

Thanks again!

It looks like he’s using it as a type hint, in which case you can edit the typing import at the top to:

from typing import List, Tuple

Woah, this is so wild! I just put your name in my reference list for Nipype! Yeah, that seems to have fixed it, thanks a lot!! :avocado::avocado::avocado:


Sorry, my bad. I copy-pasted the code from a script that does "optimal" echo combination using ANTs as a backend and the T2* fitting from the tedana package. Glad to hear it's working now! If you find issues, let me know.


This is a great script, thank you very much!!! I'll let you know if I find any issues :slight_smile:

Dear all,

I run resting state using a multi-echo (Minnesota) sequence at 7T (4 echoes).
Any advice on how to use FSL and tedana will be much appreciated!

Tali

Tali_Weiss,

It may be worth making a new thread for this, but as @handwerkerd mentioned, FSL doesn't provide GUI tools for multi-echo processing. Your best bet would likely be AFNI, with afni_proc, using Example 12. fMRIPrep is also an option, but I am less familiar with it.

If you must use FSL, then one option (among many) is to:

  1. perform slice time correction on each echo, if desired
  2. motion correction on 1st echo only
  3. apply those parameters to the other echoes and then
  4. combine (or denoise) the data using tedana.

You could then take the desired tedana output (combined data, denoised data, etc.) and pass it through conventional FSL processing (turning off masking, motion correction, etc.).
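A minimal command-line sketch of steps 2-4, assuming three echoes named echo1.nii.gz to echo3.nii.gz (the file names and the echo times passed to tedana are hypothetical placeholders; replace them with your own):

```shell
# 2. Estimate motion on the first echo, saving one .MAT file per volume
mcflirt -in echo1.nii.gz -out echo1_mcf -mats

# 3. Apply the transforms estimated on echo 1 to the other echoes
for e in 2 3; do
    applyxfm4D echo${e}.nii.gz echo1_mcf.nii.gz echo${e}_mcf \
        echo1_mcf.mat -userprefix MAT_
done

# 4. Combine / denoise with tedana (echo times in ms; replace with yours)
tedana -d echo1_mcf.nii.gz echo2_mcf.nii.gz echo3_mcf.nii.gz \
    -e 13.7 30.0 47.0
```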
