Data order for ME data and tedana

So we collected a pilot ME scan with 4 echoes and are testing tedana.

Our dcm2niix produces a single 800-time-point file, with each chunk of 200 TRs being the data from a single echo.

So scans 1-200 are TE1, 201-400 are TE2, 401-600 are TE3, and 601-800 are TE4.

Just want to make sure tedana understands the data in that order? I can’t find specific info in the docs on expected inputs and file order.

: )

Colin

Hi @ColinHawco, that’s somewhat surprising to me because I’ve never seen dcm2niix produce something quite like that. Are you using the %e flag with dcm2niix? That should have it write out separate echoes. Also, note that the data needs to at least undergo preprocessing (motion correction, at minimum) before being passed into tedana. Processing of multi-echo data has some specific requirements, and I prefer afni_proc.py for this purpose.
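For reference, a dcm2niix call along these lines should write one compressed NIfTI per echo (the paths and the filename pattern here are just illustrative):

```bash
# -f sets the output filename pattern: %p = protocol name, %e = echo number
# -z y gzips the output; the paths below are placeholders
dcm2niix -z y -f %p_echo-%e -o /path/to/output /path/to/dicoms
```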

Regarding the overall question, some notes on usage can be found here. There are two main ways to input data into tedana: a ‘legacy’ format that allows for Z-concatenated echoes, and the more common one, where each echo is its own separate nii(.gz) file. So, typically you would do something like `-d echo1.nii.gz echo2.nii.gz echo3.nii.gz -e 15.0 39.0 63.0`.
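For your four-echo data, that would look something like the sketch below. The echo times here are placeholders, so substitute the actual TEs from your protocol (tedana’s -e flag takes milliseconds):

```bash
# One NIfTI per echo; -e lists the echo times in ms (placeholder values)
tedana -d echo1.nii.gz echo2.nii.gz echo3.nii.gz echo4.nii.gz \
       -e 12.0 28.0 44.0 60.0 \
       --out-dir tedana_out
```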

The precursor of tedana used to take in raw data in odd formats, but this is no longer the case. Hopefully this clears some things up.

It’s a GE acquisition, not sure why dcm2niix stacks it that way, but using %e doesn’t seem to help. However, it looks pretty clear we need to split the file prior to tedana to be sure it processes correctly.
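Something like this with FSL’s fslroi should do the split, assuming the 200-volume chunks really are stacked in TE order (output names are just placeholders):

```bash
# fslroi <input> <output> <tmin> <tsize>; volume indices are zero-based
fslroi combined.nii.gz echo1.nii.gz   0 200
fslroi combined.nii.gz echo2.nii.gz 200 200
fslroi combined.nii.gz echo3.nii.gz 400 200
fslroi combined.nii.gz echo4.nii.gz 600 200
```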

Our group uses fMRIPrep, so we’re thinking through how to split the processing, especially motion correction, and specifically how to make sure we apply the exact same motion transform to each echo. But we’ll figure it out; lots of ways to fight a dragon.

Thanks for the tips, Logan @dowdlelt

That makes sense; GE metadata is apparently a struggle sometimes. You’ve got the right idea, though, regarding splitting the files. fMRIPrep should work well and handle the multi-echo data appropriately, though search around on here because there have been a number of questions about applying denoising vs. just getting the combined echoes. The biggest challenge may just be generating useful JSON sidecar files with appropriate echo times after you split the files (quick sketch below). Good luck!
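If it helps, here’s a minimal sketch for writing BIDS-style sidecars after the split. The echo times are made up, so pull the real ones from your DICOM headers, and note that BIDS stores EchoTime in seconds (whereas tedana’s -e flag takes milliseconds):

```bash
# Hypothetical echo times in seconds -- replace with your sequence's real TEs
tes=(0.012 0.028 0.044 0.060)
for i in 1 2 3 4; do
  # Filenames are illustrative; match them to your split NIfTIs
  cat > "sub-01_task-rest_echo-${i}_bold.json" <<EOF
{
  "EchoTime": ${tes[$((i - 1))]}
}
EOF
done
```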