Questions about the fMRI denoising algorithm tedana: convergence warnings, sensitivity to rounding of echo times, and differences between released and unreleased versions

Dear researchers and developers,

We are using a multi-echo fMRI sequence in combination with the tedana denoising algorithm (https://tedana.readthedocs.io/en/latest/index.html) to obtain a better signal from the temporal lobes in cognitive fMRI studies.

We collected pilot data from one subject, asking the participant to do a language task that we expect to activate the temporal lobes. We processed the data minimally, using only slice-timing and motion correction. We estimated motion parameters based on echo 1 and then applied them to the other two echoes, as recommended. We then ran tedana version 0.0.5 (the most recent release) on the three echoes, and also ran the new unreleased version for comparison. We then applied smoothing and spatial normalisation to the data of each echo, to the combination of echoes (ts_OC), and to the denoised combination of echoes (dn_ts_OC). Finally, we applied a General Linear Model separately to each 4D time series (echo 1, echo 2, echo 3, ts_OC, dn_ts_OC) to see in which brain areas we detect activation. Note that the participant’s head movement was minimal.

The brain activation detected with version 0.0.5 of tedana made sense: when using the denoised time series (dn_ts_OC), we detected more activity in the temporal lobes than when using the uncorrected time series. However, when using the new unreleased version of tedana (at https://github.com/ME-ICA/tedana/archive/master.zip) we detected less activation. We also got convergence warnings with both versions of tedana (“ConvergenceWarning: FastICA did not converge” for v0.0.5 and “WARNING:tedana.decomposition.eigendecomp:ICA failed to converge” for the new version).

Based on these results, we would like to ask:

  1. Should we trust the results of the older version of tedana rather than the new unreleased version, given that the results of the older version looked sensible and the newer version may not be stable yet?

  2. Are the convergence warnings something to be worried about, or can they be ignored given that the brain activation we detected looked sensible?

In addition, we noticed that we get very different results in our final analysis (i.e. the neural activation we detect) depending on whether the echo times are given to tedana version 0.0.5 as integers (13 31 49) or with two decimal places (13.00 30.99 48.98). These differences in activation may arise because of a few differences in which components are selected: component #9 is accepted with integer echo times but rejected with decimal echo times, and component #15 is rejected with integer echo times but accepted with decimal echo times, although the ICA component tables themselves look very similar. Thus we would also like to ask:

  1. Is it to be expected that tedana’s component selection is influenced by small differences in the rounding of echo times? And would you generally recommend using echo times rounded to integers, or retaining some number of decimal places?

The data and scripts I used in the analysis can be downloaded from: https://uoe-my.sharepoint.com/:f:/g/personal/atamm2_ed_ac_uk/EnH95ahY35pHpxblNaMeDw8BdOZwi8p0UNYFMrUr4ejFuA?e=YU2izM

Check the README.txt for a more detailed description of the folders and files. More information about what the scripts do can be found in the comments within the scripts. If anything is unclear, let me know.

Many thanks for your help!

Andres


Andres Tamm, MSc
Research Assistant
Department of Psychology
The University of Edinburgh

Hi Andres,

At first glance, your preprocessing pipeline seems sound.

The changes from 0.0.5 to the current version of tedana are primarily bug fixes and documentation improvements, although there have been a couple of changes to the pipeline. The bugs from 0.0.5 would have raised an error if they had been triggered, so I doubt that those fixes have an impact on your results. I think that the only thing that could have had an impact is the change to the PCA step we use to dimensionally reduce the data prior to denoising with ICA. In 0.0.5, we used a combination of MLE dimensionality estimation and a decision tree created by Prantik Kundu to identify which PCA components should be removed. After a lot of discussion, we agreed that we should split these two methods apart, since both are designed to dimensionally reduce the data, but in different ways. If you want to maximize the similarity between runs from 0.0.5 and the current version, then I think that using the option --tedpca kundu when calling the current version is probably the best way to do that.

I think others can speak to the rationale behind splitting MLE and the decision tree into separate options, but I honestly don’t think that the original method has a better rationale behind it than the one(s) in the current version. You may just want to use --tedpca kundu in order to get results that are more similar to what you’re used to.
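
To make that concrete, a call through the Python interface would look something like this (the file names and output directory are placeholders, and the tedpca keyword is the Python-side counterpart of the --tedpca flag):

    from tedana.workflows import tedana_workflow

    # Run the current tedana on the three echoes, but use the Kundu decision
    # tree rather than the default MLE for the PCA dimensionality reduction,
    # to stay closer to the 0.0.5 behaviour.
    tedana_workflow(
        data=['echo1.nii.gz', 'echo2.nii.gz', 'echo3.nii.gz'],  # placeholder file names
        tes=[13.00, 30.99, 48.98],  # echo times in ms
        tedpca='kundu',             # counterpart of --tedpca kundu
        out_dir='tedana_kundu',     # placeholder output directory
    )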

We plan to evaluate this empirically in the near future, but the consensus among tedana contributors seems to be that convergence warnings indicate that denoising is likely not going to be as effective as one would like, but that the results should still be fine. The reasoning is that, while the ICA components from a model that did not converge might not reflect patterns in the actual data in the same way that components from a good run would, we only identify good and bad components based on how they scale with echo time. The metrics we use to reject bad components should still apply to essentially random components just as they would to meaningful ones, but the resulting denoised data may not have the meaningful noise components removed.

I find it a little odd that tedana would be influenced by such small differences, although these differences could also be exacerbated by the data. I started running tedana on your data and noticed that the mask generated by our internal function is massive (i.e., it covers almost the entire bounding box). Issues with make_adaptive_mask are something we’re looking into, but in the meantime you may want to use an explicit mask. I started a run using a mask generated with compute_epi_mask, and the resulting adaptive mask looks much better. Including voxels from outside the brain in the component selection could amplify differences initially introduced by the different echo times.
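
If it helps, building and saving such a mask takes only a couple of lines with nilearn (the file names below are placeholders):

    from nilearn.masking import compute_epi_mask

    # Build a brain mask from the first echo's functional time series and save
    # it, so it can be passed to tedana as an explicit mask instead of relying
    # on the internal make_adaptive_mask-based masking.
    mask_img = compute_epi_mask('echo1.nii.gz')  # placeholder file name
    mask_img.to_filename('epi_mask.nii.gz')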

Also, I would recommend using the most accurate echo times possible. One of the first things we do in the workflow is convert the echo times to floats, so it doesn’t help to have them as integers if they aren’t integers in reality.

Thank you for sharing the data and code. It made it much easier to look into possible issues. I’ll be more than happy to help if the proposed adjustments don’t make things better or if you find my recommendations incomplete.


Hi Taylor,

Thank you very much for taking the time to think about this issue and for your openness to exploring it further. Also, sorry for my slow response.

I noticed that the brain mask has an effect on the analysis: when using a mask created by compute_epi_mask, tedana 0.0.5 and 0.0.6 give similar results and these results do not depend much on the rounding of echo times.

Furthermore, when I ran tedana 0.0.6 inside a brain mask created from the EPI images, somewhat different components were accepted compared to running tedana 0.0.6 in a wider mask created from the anatomical image (components for the latter case are these). Using the narrower EPI mask, we also detected a bit more activation in the temporal lobes. (The anatomical image itself is here.)

Following from this, and referring back to my previous questions, I was wondering:

  1. Is it expected that using a wider or narrower brain mask changes, to some degree, which components are accepted? And would you generally recommend creating the brain mask from the EPI images, to ensure that only voxels with strong enough signal enter the analysis, rather than creating a mask from the anatomical image?

  2. As you suggested, convergence warnings should not be of much concern. But is there any parameter in tedana that we could change to achieve convergence? For example, is there a way to increase the number of iterations? We could easily run tedana for longer, because we have enough computing power.

Thanks again for your help,

Andres

Hi Andres,

The brain mask should indeed have an impact on the decomposition and the component selection, which is why I think the masking method is something that we (the tedana devs) will have to rethink in the near future. I would recommend using a mask that reflects the BOLD signal, so using the first echo (assuming it’s fairly short) or a skullstripped anatomical should both work well. I’m not sure which of those two is preferable, though – maybe one of the other tedana contributors could weigh in there?

We disabled an n_iters option a while ago under the assumption that 5000 iterations would be enough. That said, it may be worth re-implementing that option. We’re also discussing an option that would automatically restart ICA, up to a certain number of times, if it fails to converge, as MELODIC does. Neither of those will help you right now, though.

If you have tedana installed locally, you could just change the value of max_iter where we call FastICA here. Alternatively, you could try using the stabilized Kundu decision tree for the PCA step (i.e., --tedpca kundu-stabilize). This approach dimensionally reduces the data more aggressively than the MLE or standard Kundu approaches, which should make it easier for ICA to converge. The risk is that reducing the dimensionality too much could prevent ICA from identifying meaningful noise components, but of course an inefficient ICA (from non-convergence) could have the same effect.
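
For reference, the setting in question is just the max_iter argument of scikit-learn’s FastICA; the snippet below only illustrates where that parameter lives (it runs on random stand-in data and is not a copy of tedana’s internal call):

    import numpy as np
    from sklearn.decomposition import FastICA

    # Random stand-in for the dimensionally reduced data that gets handed to ICA,
    # just so this snippet runs on its own.
    data_2d = np.random.RandomState(0).randn(200, 50)

    # Increasing max_iter gives FastICA more iterations in which to converge.
    ica = FastICA(n_components=30, max_iter=5000, random_state=42)
    components = ica.fit_transform(data_2d)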

Just to jump in, I think the key piece is that you want a mask that reflects the data. Having a mask that is too wide or too narrow will change what data are included in the decomposition and will therefore yield different components.

There is no strong reason to prefer an anatomical mask over an EPI mask if the two are reasonably similar – which they should be for the first echo (as @tsalo recommended). I would likely recommend defaulting to an EPI mask from the first echo for that reason, as skullstripping brings its own set of concerns.

Hi Andres,

We recently added the ability to restart ICA with a different random seed in the hope of achieving convergence. The new arguments are --maxit (maximum number of iterations per restart; default is 500) and --maxrestart (maximum number of restarts; default is 10). I’m optimistic that this will help deal with convergence failures and the resulting suboptimal denoising.

We haven’t issued a release with these features yet, so you’ll need to use the master branch from GitHub if you want to try this out (or you could wait until our next release, which should be 0.0.7).
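
If you end up calling tedana from Python rather than the command line, the equivalent call should look roughly like this (placeholder file names again; maxit and maxrestart are the keyword counterparts of the new flags):

    from tedana.workflows import tedana_workflow

    # Allow ICA to restart with a new random seed (up to 10 times) whenever an
    # attempt fails to converge within 500 iterations.
    tedana_workflow(
        data=['echo1.nii.gz', 'echo2.nii.gz', 'echo3.nii.gz'],  # placeholder file names
        tes=[13.00, 30.99, 48.98],
        maxit=500,       # per-restart iteration limit (--maxit)
        maxrestart=10,   # maximum number of restarts (--maxrestart)
        out_dir='tedana_out',
    )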


Hi Taylor, thank you very much for this, I will try it out. I will also soon respond more fully to your and Elizabeth’s previous messages – we have collected new multi-echo pilot data under two different scanner setups and I have been analysing it.
Many thanks
Andres

Hi Taylor and Elizabeth,

Thanks again for your suggestions. I tried setting the number of iterations to 50,000 but it did not help with convergence. I have not yet tried the restart option or kundu-stabilize.

However, I noticed that convergence seems to depend on the number of volumes. We have now collected three runs of the same language task under two scanner setups: 2.4 mm voxels (TR = 2.2 s, 147 volumes per run) and 3 mm voxels (TR = 1.7 s, 195 volumes per run). I tried tedana with different masks, and it almost always converged with the second setup (which had more volumes) but not with the first. Furthermore, when I retained only the first 100 volumes from the first run of the 3 mm setup, tedana no longer converged on that run (whereas it did converge with the full 195 volumes). I have not yet properly read up on how the tedana algorithm works, but I wonder whether the number of data points in the voxel time series is important for convergence? In other words, should one try to avoid short runs?
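
In case anyone wants to reproduce the truncation, it can be done with a couple of lines of nibabel (the file names here are placeholders):

    import nibabel as nib

    # Load the full run and keep only the first 100 volumes along the time axis;
    # img.slicer keeps the affine and header consistent with the cropped data.
    img = nib.load('run1_3mm_bold.nii.gz')  # placeholder file name
    short_img = img.slicer[:, :, :, :100]
    short_img.to_filename('run1_3mm_bold_first100.nii.gz')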

Many thanks,
Andres

Hi Andres,

That is unfortunate. Just to be clear, it fails to converge for the three shorter runs with the default seed (42), 50,000 iterations, no restarts, an EPI mask, and the --tedpca kundu option, correct?

How many components are being retained at the PCA stage for each of the runs/sequences?

Regarding run duration, it is entirely possible that shorter runs will have more problems with convergence. This could be driven in part by overly aggressive dimensionality reduction in the TEDPCA step (if you’re using --tedpca kundu or --tedpca kundu-stabilize). However, it could also be a direct consequence of the shortness of the scans, although I don’t know much about the causes behind that. The same issue would affect other ICA tools (e.g., MELODIC), but I don’t know whether there’s a solution.

Hi Taylor,

Sorry for my slow response.

For the 3 mm scanner setup, I ran tedana inside an EPI mask and used the default --tedpca mle. It converged for all three runs. In the PCA stage it selected 194 components for each run, which is the number of volumes in the run minus 1.

For the 2p4mm setup, I initially ran tedana with an EPI mask and the default --tedpca mle. It did not converge for any of the three runs. In the PCA stage, it also selected “number of volumes minus 1” components for each run – which in this case was 146.

I have now run the tedana master version on the three runs of the 2p4mm setup, using an EPI mask and specifying --tedpca kundu-stabilize --maxit 50000 --maxrestart 100. For runs 2 and 3, tedana converged on the first attempt in fewer than 220 iterations, but for run 1 it has so far made 4 attempts of 50,000 iterations each without converging. In the PCA step it selected between 25 and 42 components across the runs. For one of the runs it also warned that no BOLD-like components were detected. It does seem that tedana is more likely to converge with kundu-stabilize, but maybe it would also converge if we simply used a longer run.

I was also wondering whether it is normal that, with --tedpca mle, tedana selects “number of volumes minus 1” components at the PCA stage, or whether that is perhaps too liberal a starting point for the ICA?

If it helps I can also upload the data and scripts.

Many thanks!
Andres

While MLE can in principle reduce the dimensionality of the data, in practice it doesn’t seem to do so very often. It’s hard to know how many components would be optimal for the subsequent ICA, though, which is why we have chosen an established, data-driven approach (MLE) as the default.

25-42 components does seem very low, so I’m not surprised that no BOLD-like components are sometimes detected. In that case, perhaps you should use --tedpca kundu instead of kundu-stabilize, since the latter is being too aggressive.

Yes, it does seem likely that increasing run length would improve the likelihood of convergence. Having more echoes could also help. These are things we (the tedana developers) are hoping to investigate more systematically, but in the meantime I would guess that increasing run duration is a good start. Hopefully more data would also make the MLE estimate more reasonable. Even if the maximum number of components is selected at the PCA stage, ICA might converge better with more data.

It also seems like the random seed matters more than the maximum number of iterations, in that, when ICA does converge, it seems to do so rather quickly. It might be better for you to try running with a smaller maxit and a comparable maxrestart for the one run that’s still not converging.

Hi Taylor, thank you for your input. The random seed does indeed seem to matter more in our data. For example, we collected 4 runs of new pilot data and tedana did not converge for any of the runs with --maxit 5000 and --maxrestart 2. However, when I changed this to --maxit 500 and --maxrestart 40, it eventually converged for 3 of the 4 runs in fewer than 400 iterations.

Many thanks,
Andres