Compare two protocols for TEDANA

Thanks @dowdlelt!

  1. To compare the different protocols I calculated the within-network connectivity (which should be high) and the between-network connectivity (which, presumably, should be low) for each protocol, after running tedana+xcpd with aCompCor regressors. For the single-echo scans I only applied xcpd. The two colors are two different subjects, and the single-echo scans are the first 4 bars.

se - single echo
me - multi-echo
mb - multi-band
grp - grappa
pf - partial Fourier
ff - full Fourier
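The within/between-network comparison in point 1 can be sketched roughly as follows. This is my own minimal illustration, not the actual analysis code: it assumes parcel time series and network labels have already been extracted, and the function name is made up.

```python
import numpy as np

def network_connectivity(ts, labels):
    """Mean within- and between-network connectivity.

    ts: (time, regions) array of parcel time series (hypothetical input).
    labels: (regions,) array of network assignments.
    Returns (within, between) mean Pearson correlations, excluding the
    diagonal (self-correlations).
    """
    r = np.corrcoef(ts.T)                      # region-by-region correlation matrix
    same = labels[:, None] == labels[None, :]  # True for within-network pairs
    off = ~np.eye(len(labels), dtype=bool)     # drop the diagonal
    within = r[same & off].mean()
    between = r[~same].mean()
    return within, between
```

With denoising that works, `within` should sit clearly above `between`; the bar plots compare exactly these two quantities across protocols.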

Unfortunately, based only on this sample, it doesn’t seem like the mean+std of these measures improves much. Perhaps for the orange subject it does improve slightly between single-echo and multi-echo scans. Would you expect greater effects? Or is it too hard to say based on so few scans?

  2. I didn’t think about the time SBRef takes. You meant that noise might still leak into the data via motion during the SBRef scan, correct?
    I hope this is not off-topic, but I don’t know what you consider to be low motion for subjects. Here are some histograms showing subject counts for several summary statistics of framewise displacement in our data:

While the maximum FD can get quite large, the median is always below 0.3 mm.
Would you consider these to be low-motion?
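For reference, the framewise displacement behind those histograms is typically the Power-style FD: the sum of absolute backward differences of the six realignment parameters, with rotations converted to millimeters on an assumed 50 mm head radius. A minimal sketch (my own, not fMRIPrep's implementation):

```python
import numpy as np

def framewise_displacement(params, radius=50.0):
    """Power-style framewise displacement.

    params: (time, 6) array; columns are 3 translations (mm) followed by
    3 rotations (radians). Rotations are converted to arc length on a
    sphere of `radius` mm (50 mm is the common convention).
    Returns FD per frame transition, shape (time - 1,).
    """
    p = params.astype(float).copy()
    p[:, 3:] *= radius                 # radians -> mm on the 50 mm sphere
    diff = np.abs(np.diff(p, axis=0))  # backward differences per parameter
    return diff.sum(axis=1)            # sum over the 6 parameters
```

Summary statistics like the median and maximum in the histograms are then just `np.median(fd)` and `fd.max()` per run.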

Thanks for sticking here with me :smiling_face_with_tear:

Summary measures are nice, but as they say, garbage in, garbage out. So it could be that tedana denoising didn’t work well, or that your echoes were still too short. Things like the number of echoes, the last echo time, etc., could all influence the quality of tedana processing and results. I also think it’s too little data to be conclusive, but it is enough data for me to think that multi-echo isn’t doing harm, which is important. Even if tedana denoising performance isn’t perfect, that can be adjusted for better performance, but you’ve got to have the multi-echo data to start with!

Honestly, though, I think this is kind of pursuing the wrong kind of final details, and it’s tricky to compare these things. For example: anything with partial Fourier is going to be slightly smoother, likely leading to higher correlations versus a full-Fourier scan. The full-Fourier scan will also have more data in hard-to-image regions, so potentially more (but slightly noisier) data could be included.

GRAPPA is going to change distortion as well (it’s lower with GRAPPA on), so if distortion correction isn’t applied there could be other issues with the comparison.

In general, I think that 1) more echoes and 2) echo times out to ~65 ms are more important than trying to avoid GRAPPA 2, especially when using something like FLEET, which is robust to motion during the calibration scans.

My advice, which you absolutely don’t have to follow: I think you want GRAPPA 2, and then 4 or 5 echoes out to 60 ms or so. I think you can still hit your 1 s target with full Fourier. I know that previous data was a little odd, with the noise component being left in the data, but that sort of thing is obvious to remove, and tools exist to make it easy (RICA for tedana outputs).

I just saw that you had mentioned in an earlier post that the adaptive mask had signal for all echoes. That is good, but it also means that your echo times are kind of short: even the ventral temporal lobe didn’t have enough time to lose signal! That could also explain why curve fit had different behavior; shorter echo times don’t show the full decay curve. Just a theory, though.

That motion in general looks relatively low to me. All motion is bad, but that doesn’t frighten me. And yes, the worst time for someone to move is during the initial prep time of the scanner: that’s when it’s getting the calibration scans for GRAPPA, then the SBRef, and then a few scans that are usually automatically discarded before data collection starts. Movement during those first two is going to be worse than movement during the scan. I think the best you can hope for is to remind participants to do their absolute best to stay still, especially at the beginning, and to warn them right before the scanner noises start so it doesn’t startle them.

Thanks, @dowdlelt, your responses are always so insightful!

I’m collecting a pair of short AP-PA scans prior to each run for distortion correction using topup (I’m collecting ~10 seconds of each phase-encoding direction, but perhaps that’s too much and I could reduce this, or use the SBRef of each phase-encoding direction separately?).

I actually already collected some data with GRAPPA 2 and 4-5 echoes (below are results for a protocol with 7/8 partial Fourier, MB 4, GRAPPA 2, TR = 1090 ms, 3.2 mm voxels, and TEs = 13, 27.6, 42.2, 56.8, 71.4 ms). I was surprised that the adaptive mask still had signal for all echoes, even for TE = 71.4 ms. I think the issue is that the mask itself is computed within the initial EPI brain mask, which excludes regions of low signal like the OFC and some temporal regions. See for example here, where tedana’s S0 map is overlaid on top of the T1w image:

You can also see that S0 is badly estimated in the OFC (as evident from the very high estimated values there).
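As a back-of-the-envelope check on "signal at all echoes": assuming mono-exponential decay, S(TE) = S0 · exp(−TE/T2*), and a rough gray-matter T2* of ~45 ms at 3 T (an assumed value, not measured from this data), the fraction of S0 remaining at each echo of the protocol above is:

```python
import numpy as np

# TEs from the protocol above (ms); T2* of 45 ms is an assumption.
tes = np.array([13.0, 27.6, 42.2, 56.8, 71.4])
t2star = 45.0
remaining = np.exp(-tes / t2star)  # S(TE) / S0 under mono-exponential decay
for te, frac in zip(tes, remaining):
    print(f"TE = {te:5.1f} ms -> {100 * frac:4.1f}% of S0 remains")
```

So even the last echo should retain roughly a fifth of the signal in typical gray matter, which may be part of why the adaptive mask keeps all five echoes in most voxels; only regions with much shorter T2* (OFC, temporal poles) would decay into the noise floor by 71.4 ms.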

I therefore reran tedana, but this time I provided it with a mask which is the union of fMRIPrep’s EPI brain mask and the anatomical brain mask (after regridding to EPI space). This obviously resulted in greater coverage in the S0 map:

I’m not surprised by the low S0 values there, given how low the signal is even for the first echo (13 ms):

But what surprises me is that the adaptive signal mask is still perfect, equal to 5 throughout the brain (except for a single posterior voxel equal to 4). I don’t know what I’m missing, but that doesn’t sound quite right.

I chose an arbitrary voxel in the OFC, where the signal seems very low, and plotted the signal against the TE. Given how the linearity breaks down in the log(signal) plot, it seems plausible that the 5th echo would not be considered good signal here. But perhaps the conservative way tedana calculates the adaptive signal mask still considers this “good”. Finally, there’s always the possibility that this is a voxel with non-mono-exponential decay, especially given the higher probability of partial-volume effects with 3.2 mm voxels.
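The linearity check for a single voxel can be sketched as a log-linear fit of the decay model, with R² as a rough "is this still mono-exponential?" indicator. This is my own illustration, not tedana's internal fitting code (tedana's log-linear fit differs in details such as weighting):

```python
import numpy as np

def loglinear_t2star(tes, signal):
    """Log-linear fit of S(TE) = S0 * exp(-TE / T2*) for one voxel.

    tes: echo times (ms); signal: signal at each TE (must be positive).
    Returns (s0, t2star, r_squared). A low r_squared suggests the decay
    is not mono-exponential, or the later echoes sit at the noise floor.
    """
    tes = np.asarray(tes, dtype=float)
    y = np.log(np.asarray(signal, dtype=float))
    slope, intercept = np.polyfit(tes, y, 1)   # straight line in log space
    s0 = np.exp(intercept)
    t2star = -1.0 / slope
    pred = intercept + slope * tes
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return s0, t2star, 1.0 - ss_res / ss_tot
```

Refitting with the 5th echo dropped and comparing R² would show directly whether that last point is the one breaking the linearity.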

Given that the adaptive signal mask considers these voxels “good”, would you say that combining the EPI mask with the anatomical one is a wise step, or is it too risky in terms of contaminating the data?

Thanks again for your invaluable input!

I wouldn’t worry too much about contamination; my intuition is that a mask having 5-10% more voxels (if that!) isn’t going to really break anything. I prefer to provide a liberal mask and let tedana cut things down as it needs to.

I’m surprised about the adaptive mask bit. I’d have to peek at the data, and even then I wouldn’t be sure. I definitely wouldn’t stress over it, but it might be something we need to take a look at. If the values are that low, then they shouldn’t have a huge negative impact on the fit, but fitting to noise would be kind of annoying.

Regarding those S0 values: you’ve hit on one fact, that mono-exponential decay isn’t perfect, but even more than that, it’s just hard to fit the model in those regions; they are “extreme” areas. The good news is, again, that nothing has to be perfect for things to work well. Given that this is EPI imaging, it doesn’t need to be perfectly quantitative, just a good-enough approximation for subsequent steps to work well enough.