We are running an ASL/BOLD multi-echo multiband sequence with a TR of 4.115 seconds.
We would like to extract the ASL signal (label vs. control) with the best possible tSNR from the BOLD signal.
In the literature, people usually filter and demodulate a single echo, but we were wondering whether it would be possible to use all 4 echoes of the label and control volumes to gather as much signal as possible.
Something like tedana would be marvelous.
This sounds like a good idea, but I’m not sure anyone has specifically tried it yet. I don’t keep up with the ASL literature, but I doubt there is public software that could do it for you. As a first pass, I’d lean away from ICA-based denoising because the out-of-the-box methods for selecting good vs. bad components likely won’t work for the tagged volumes.
You probably could do a weighted average of the echoes (e.g. Poser et al. 2006, https://doi.org/10.1002/mrm.20900 ), but you might want to give some thought to which weightings optimize for what you want in the tag and control images. Any consistent weighting would likely be better than not using all 4 echoes.
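For reference, a minimal sketch of a Poser-style tSNR-weighted echo combination. The echo times and the T2* value here are placeholders, not from your sequence; in practice you would use your actual TEs and ideally a fitted per-voxel T2* map rather than a single constant.

```python
import numpy as np

# Hypothetical echo times (ms) and an assumed gray-matter T2* (~30 ms at 3T).
# Substitute your sequence's real TEs and a fitted T2* map in practice.
tes = np.array([12.0, 28.0, 44.0, 60.0])  # ms, illustrative only
t2star = 30.0                              # ms, assumed constant here

# Poser et al. (2006) tSNR-optimal weights: w_i ∝ TE_i * exp(-TE_i / T2*)
w = tes * np.exp(-tes / t2star)
w /= w.sum()  # normalize so the weights sum to 1

def combine_echoes(echo_data, weights):
    """Weighted sum across the echo axis.
    echo_data: array with echoes on axis 0, e.g. shape (4, x, y, z, t)."""
    weights = np.asarray(weights).reshape((-1,) + (1,) * (echo_data.ndim - 1))
    return (weights * echo_data).sum(axis=0)

# Toy example: 4 echoes of a 2x2x2 volume
data = np.random.rand(4, 2, 2, 2)
combined = combine_echoes(data, w)
print(combined.shape)  # (2, 2, 2)
```

The same weights would be applied identically to label and control volumes before subtraction, so the combination does not bias the difference signal.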
Hope this helps
Hello, thank you for sharing your thoughts.
We are trying ICA denoising, but tedana doesn’t converge once the echoes are filtered. I’ll have to check on this.
Regarding the optimal combination for ASL, we were thinking of using the same weights for the label and control volumes.
The best option would be to fit an exponential decay curve, I guess.
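If you do go the exponential route, a simple log-linear least-squares fit of S(TE) = S0 · exp(−TE/T2*) across the echoes is one way to sketch it. The TEs below are placeholders for illustration:

```python
import numpy as np

# Hypothetical echo times (ms); replace with your sequence's actual TEs.
tes = np.array([12.0, 28.0, 44.0, 60.0])

def fit_monoexponential(signal, tes):
    """Log-linear fit of ln S = ln S0 - TE / T2* per voxel.
    signal: shape (n_echoes, n_voxels), strictly positive."""
    A = np.column_stack([np.ones_like(tes), -tes])        # design matrix
    coef, *_ = np.linalg.lstsq(A, np.log(signal), rcond=None)
    s0 = np.exp(coef[0])      # intercept -> S0
    t2star = 1.0 / coef[1]    # slope -> 1/T2*
    return s0, t2star

# Sanity check with synthetic decay
true_s0, true_t2 = 100.0, 30.0
sig = true_s0 * np.exp(-tes / true_t2)
s0, t2 = fit_monoexponential(sig[:, None], tes)
print(round(s0.item()), round(t2.item()))  # 100 30
```

The fitted S0 (or T2*) map could then feed back into the echo weighting, though a log-linear fit on noisy subtracted data can be unstable, so fitting on the raw (pre-subtraction) echoes is probably safer.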
As a start, what do you think of simply averaging the filtered/subtracted echoes?
It would look like
(1/4) × [(echo1_LBL − echo1_CTRL) + (echo2_LBL − echo2_CTRL) + … + (echo4_LBL − echo4_CTRL)].
It is a broad approximation, but a start.
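That uniform average is trivial to implement; a minimal sketch, assuming the label and control series are stacked with echoes on the first axis (the array layout here is hypothetical):

```python
import numpy as np

def mean_perfusion_signal(label, control):
    """Average the label-control difference across echoes.
    label, control: arrays with echoes on axis 0,
    e.g. shape (n_echoes, x, y, z, t) -- hypothetical layout."""
    diff = label - control      # per-echo perfusion-weighted signal
    return diff.mean(axis=0)    # uniform 1/n_echoes weighting
```

Note that this is exactly the weighted combination above with all weights equal to 1/4, so switching to T2*-based weights later only changes one line.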
One question we are still wondering about is which volumes to average the echoes over.
Should we average the raw ASL volumes, the filtered volumes (CBF), or the volumes once the echoes have been regressed?
Thanks for any advice.