Hi! I am an MSc student and I am new to multi-echo fMRI and neuroimaging in general. I have a dataset consisting of 3 echoes per participant. I wish to motion-correct all three echoes to the same reference image before combining the echoes with tedana. So far I have been unable to do so, as FSL performs motion correction on each individual echo separately. Any advice on how to proceed? I am unable to use fMRIPrep at my institution, but I have Nipype available. I have been unable to use afni_proc so far.
As of now, I think AFNI and fMRIPrep are the only software pipelines with default settings for multi-echo motion correction. In both cases, the motion correction translation and rotation parameters are calculated on a single echo for each volume, and then the identical translation and rotation parameters are applied to the other echoes of the same volume. I’m fairly sure you can do this using command-line functions in FSL, but I’m not a regular FSL user, so I don’t know precisely which commands to use.
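To make that idea concrete, here is a minimal NumPy/SciPy sketch of what "estimate on one echo, apply the identical parameters to the others" means. The transforms here are toy translations rather than real registration estimates, and all names are made up for illustration:

```python
import numpy as np
from scipy.ndimage import affine_transform

rng = np.random.default_rng(0)
n_vols, shape = 4, (8, 8, 8)
# Three echoes sampled at the same time points: the head is in the same
# position in volume t of every echo, so one transform per volume suffices.
echoes = [rng.random((n_vols, *shape)) for _ in range(3)]

# Pretend these per-volume parameters were estimated from the first echo.
# (Toy translations; a real pipeline estimates them by rigid registration.)
offsets = [np.array([0.0, 0.5 * t, 0.0]) for t in range(n_vols)]

def apply_parameters(echo, offsets):
    """Resample every 3D volume of a 4D echo with that volume's transform."""
    return np.stack([
        affine_transform(vol, np.eye(3), offset=off)
        for vol, off in zip(echo, offsets)
    ])

# The identical per-volume parameters are applied to every echo.
corrected = [apply_parameters(e, offsets) for e in echoes]
```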
Hope this helps.
Look at Example 12 for multi-echo data processing: https://afni.nimh.nih.gov/pub/dist/doc/program_help/afni_proc.py.html
Example 12a just brings the data through motion correction & registration to a template. The key change is to use -dsets_me_run instead of -dsets (with a * wildcard for the echo number in the file names). You can also use -reg_echo to define which echo the registration will be calculated on.
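For orientation, a minimal afni_proc.py call along the lines of Example 12a might look like the sketch below; the subject ID, filenames, and echo times are placeholders, so check everything against your own data and the Example 12 help text:

```shell
# Sketch only: filenames and echo times are placeholders.
afni_proc.py                                 \
    -subj_id sub01                           \
    -blocks tshift volreg                    \
    -dsets_me_run epi_run1_echo_*.nii.gz     \
    -echo_times 12.5 27.6 42.7               \
    -reg_echo 2
```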
I was working on this recently for an old project looking at multi-echo data.
If you have access to Python and you are able to install the ANTsPy package, then this code will do the motion correction for you:
from typing import List, Tuple

import ants


def multiecho_motion_correction(
    echoes: List[ants.ANTsImage], mask: ants.ANTsImage
) -> Tuple[List[ants.ANTsImage], List[List[str]]]:
    """Given a sorted list of ANTs images, each a 4D image with a different
    echo time, compute motion parameter estimates from the first echo and
    apply them to all echoes, returning both a list of motion-corrected ANTs
    images and a list of the transformation files corresponding to the motion
    correction. The motion correction is performed using a rigid transformation.
    """
    echo_0: ants.ANTsImage = echoes[0]
    mcf_0: dict = ants.motion_correction(echo_0, mask=mask, type_of_transform="BOLDRigid")
    mcf_others = [apply_to_other_echo(x, mcf_0) for x in echoes[1:]]
    return [mcf_0["motion_corrected"], *mcf_others], mcf_0["motion_parameters"]


def apply_to_other_echo(other_echo: ants.ANTsImage, mcf: dict) -> ants.ANTsImage:
    """Given an ANTs image (other_echo) and an ANTs motion-correction output
    (a dict with 'motion_corrected' and 'motion_parameters' keys), return
    other_echo motion corrected with the transforms from the first echo.
    """
    # Split both 4D images into lists of 3D volumes, one per time point.
    fixed_list = ants.ndimage_to_list(ants.image_clone(mcf["motion_corrected"]))
    moving_list = ants.ndimage_to_list(other_echo)
    transforms = mcf["motion_parameters"]
    # Apply each volume's transform to the matching volume of this echo.
    out: List[ants.ANTsImage] = [
        ants.apply_transforms(fixed, moving, transform)
        for fixed, moving, transform in zip(fixed_list, moving_list, transforms)
    ]
    # Merge the corrected 3D volumes back into a 4D image.
    image_target: ants.ANTsImage = ants.image_clone(mcf["motion_corrected"])
    return ants.list_to_ndimage(image_target, out)
And to use it you could call:
# echoes is a list of filenames, e.g.: ["echo1.nii", "echo2.nii"]
loaded_data = [ants.image_read(x) for x in echoes]
# The mask tells the motion correction to disregard noisy voxels outside
# the brain; compute it from the time average of the first echo.
mask = ants.get_mask(ants.get_average_of_timeseries(loaded_data[0]))
list_mcf, motion_pars = multiecho_motion_correction(loaded_data, mask)
The list_mcf will contain ANTs image objects, which you can save to files individually with, for example, ants.image_write(list_mcf[0], "echo1_mcf.nii.gz").
Sorry, my bad. I copy-pasted the code from a script that does “optimal” echo combination using ANTs as a backend and the T2* fitting from the tedana package. Glad to hear it’s working now! If you find issues, let me know.
It may be worth making a new thread for this, but as @handwerkerd mentioned, FSL doesn’t provide GUI tools for multi-echo processing. Your best bet would likely be AFNI, with afni_proc, using Example 12. fMRIPrep is also an option, but I am less familiar with it.
If you must use FSL, then one option (among many) is to:

1. perform slice-time correction on each echo, if desired
2. run motion correction on the 1st echo only
3. apply those motion parameters to the other echoes, and then
4. combine (or denoise) the data using tedana.
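A rough command-line sketch of those steps (untested; filenames and echo times are placeholders, and applyxfm4D availability can vary between FSL versions):

```shell
# 1. (optional) slice-time correction on each echo, e.g. with slicetimer
# 2. estimate motion on echo 1 only, saving one matrix per volume
mcflirt -in echo1 -out echo1_mcf -mats
# 3. apply the echo-1 matrices to the other echoes
for e in 2 3; do
    applyxfm4D echo${e}.nii.gz echo1_mcf.nii.gz echo${e}_mcf.nii.gz echo1_mcf.mat -fourdigit
done
# 4. combine/denoise with tedana (echo times in ms)
tedana -d echo1_mcf.nii.gz echo2_mcf.nii.gz echo3_mcf.nii.gz -e 12.5 27.6 42.7
```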
You could then take the desired tedana output (combined data, denoised data, etc.) and pass it through conventional FSL processing (turning off masking, motion correction, etc.).