Thanks for reaching out! And thanks for the tag, @effigies!
The choice not to incorporate tedana denoising into fMRIPrep was a conscious one. Given that fMRIPrep is geared towards robustness – and given our lingering concerns as to whether tedana denoising can (yet) produce robust results – we decided not to include it for now.
tedana denoising is currently derived from ME-ICA denoising. So, if you have previously used ME-ICA and were happy with it on your own data, you should be happy (or even happier) with tedana. In our experience, however, ME-ICA denoising will systematically fail for certain types of data (block-design tasks immediately come to mind here). That said, we are actively working to improve the component selection process; see, for example: https://github.com/ME-ICA/tedana/issues/153.
We have also incorporated visualizations to help researchers inspect each component and determine whether it should have been accepted or rejected; you can then feed the corrected selections back into tedana.
As you pointed out, we have developed workarounds to grab the correct files from the working directory, which is particularly useful for researchers who have had good results with ME-ICA. For that, I'd point to a script we started for exactly this purpose: https://github.com/ME-ICA/tedana-reliability-analysis/blob/master/collect_fmriprep.py
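The core of that approach is just finding the per-echo preprocessed BOLD files and ordering them by echo number. Here is a minimal sketch; note that the glob pattern and the `echo-<n>` naming are assumptions about the working-directory layout and may need adjusting for your fMRIPrep version:

```python
import re
from pathlib import Path


def collect_echo_files(workdir):
    """Collect per-echo preprocessed BOLD files from a working directory.

    Assumes filenames contain an 'echo-<n>' entity (an assumption about
    fMRIPrep's naming, not a guaranteed interface); returns the files
    sorted by echo number so they can be passed to tedana in order.
    """
    return sorted(
        Path(workdir).rglob("*_echo-*_bold*.nii.gz"),
        key=lambda p: int(re.search(r"echo-(\d+)", p.name).group(1)),
    )
```

This keeps the echo ordering explicit rather than relying on filesystem ordering, which is not guaranteed to match echo order.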
The files you’re grabbing should be unsmoothed with no highpass filtering, and you shouldn’t need to detrend the denoised data after tedana.
On a practical level, I strongly recommend using the optimal combination with multi-echo data. This method is already incorporated into fMRIPrep, which calls tedana internally to run `t2smap`. It is very robust and unlikely to change.
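For intuition, optimal combination is a voxel-wise weighted average of the echoes, with weights proportional to `TE * exp(-TE / T2*)` (Posse et al., 1999). The sketch below is an illustrative NumPy reimplementation of that formula, not tedana's actual `t2smap` code:

```python
import numpy as np


def optimal_combination(data, tes, t2s):
    """Optimally combine multi-echo data (Posse et al., 1999).

    data: array of shape (n_voxels, n_echoes, n_timepoints)
    tes:  echo times, same units as t2s (e.g., milliseconds)
    t2s:  per-voxel T2* estimates, shape (n_voxels,)

    Illustrative only; tedana's implementation handles edge cases
    (zero/invalid T2*, adaptive masks) that this sketch does not.
    """
    tes = np.asarray(tes, dtype=float)
    # Per-voxel, per-echo weights: TE * exp(-TE / T2*)
    w = tes[np.newaxis, :] * np.exp(-tes[np.newaxis, :] / t2s[:, np.newaxis])
    w /= w.sum(axis=1, keepdims=True)  # normalize so weights sum to 1
    # Weighted sum over the echo axis -> (n_voxels, n_timepoints)
    return np.einsum("ve,vet->vt", w, data)
```

Because the weights sum to one per voxel, a signal that is constant across echoes passes through unchanged; echoes near the voxel's T2* contribute the most.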
Regarding denoising, if you have had good experiences with ME-ICA and are willing to visually inspect your components, I think that tedana offers very powerful potential for denoising your data. If, however, you cannot commit to that, or have had mixed experiences with ME-ICA in the past, I would recommend using more traditional denoising methods on the optimally combined dataset.
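By "traditional denoising" I mean things like regressing nuisance time series (e.g., motion parameters) out of the optimally combined data. A generic least-squares sketch of that idea, not a tedana or fMRIPrep API:

```python
import numpy as np


def regress_confounds(data, confounds):
    """Remove nuisance regressors from time series by ordinary least squares.

    data:      array of shape (n_timepoints, n_voxels)
    confounds: array of shape (n_timepoints, n_regressors),
               e.g. motion parameters

    Returns the residuals after projecting out the confounds (plus an
    intercept). A minimal sketch of 'traditional' confound regression.
    """
    # Design matrix: intercept column plus the confound regressors
    X = np.column_stack([np.ones(len(confounds)), confounds])
    beta, *_ = np.linalg.lstsq(X, data, rcond=None)
    return data - X @ beta
```

The intercept column means you do not need to demean the data first; anything exactly explained by the confounds (and the mean) is removed.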
Hopefully this clarifies things! Please let me know if you have any other questions.