Tedana doesn't find any BOLD components in the data

I have a set of multi-echo data that have been motion-corrected and slice-timing corrected using fMRIPrep. For some of the data, tedana fails at the spatial clustering of components, giving the following error message. I’m wondering what I can do to fix this issue? Thank you.

INFO:tedana.decomposition.pca:Computing PCA of optimally combined multi-echo data
INFO:tedana.decomposition.ma_pca:Performing SVD on original OC data…
INFO:tedana.decomposition.ma_pca:SVD done on original OC data
INFO:tedana.decomposition.ma_pca:Estimating the subsampling depth for effective i.i.d samples…
INFO:tedana.decomposition.ma_pca:Generating subsampled i.i.d. OC data…
INFO:tedana.decomposition.ma_pca:Performing SVD on subsampled i.i.d. OC data…
INFO:tedana.decomposition.ma_pca:SVD done on subsampled i.i.d. OC data
INFO:tedana.decomposition.ma_pca:Effective number of i.i.d. samples 5272
INFO:tedana.decomposition.ma_pca:Perform eigen spectrum adjustment …
INFO:tedana.decomposition.ma_pca:Estimating the dimension …
INFO:tedana.decomposition.ma_pca:Estimated components is found out to be 45
INFO:tedana.metrics.kundu_fit:Fitting TE- and S0-dependent models to components
INFO:tedana.decomposition.pca:Selected 45 components with mdl dimensionality detection
INFO:tedana.decomposition.ica:ICA attempt 1 converged in 84 iterations
INFO:tedana.workflows.tedana:Making second component selection guess from ICA results
INFO:tedana.metrics.kundu_fit:Fitting TE- and S0-dependent models to components
INFO:tedana.metrics.kundu_fit:Performing spatial clustering of components
INFO:tedana.selection.tedica:Performing ICA component selection with Kundu decision tree v2.5
WARNING:tedana.selection.tedica:Too few BOLD-like components detected. Ignoring all remaining.
WARNING:tedana.workflows.tedana:No BOLD components detected! Please check data and results!
INFO:tedana.io:Writing optimally combined time series: C:\test\out\ts_OC.nii.gz
INFO:tedana.io:Variance explained by ICA decomposition: 97.34%
INFO:tedana.io:Writing low-Kappa time series: C:\test\out\lowk_ts_OC.nii.gz
INFO:tedana.io:Writing denoised time series: C:\test\out\dn_ts_OC.nii.gz
INFO:tedana.io:Writing full ICA coefficient feature set: C:\test\out\betas_OC.nii.gz
INFO:tedana.workflows.tedana:Making figures folder with static component maps and timecourse plots.

One relatively easy thing to try would be to change the random seed. The ICA components would then be different, and might dissociate BOLD from non-BOLD signals better.

If you share the outputs (especially the report HTML file), it might help us (the tedana devs) figure out what may have led to no BOLD components being found. We’re definitely looking for ways to improve the decision tree.

Thank you for your suggestion. Would you mind letting me know how I can change the random seed? Should I use a different mask? I tried to run tedana without a mask and the result was the same.
I couldn’t upload the outputs here, but I uploaded the report.txt file and the figures in this GitHub post:


You can specify the seed with the --seed option. I believe the default is 42. You could try giving it another value, e.g. --seed 2021.
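For reference, a re-run with a different seed might look something like this. The echo file names, echo times, and output directory below are placeholders, not values from the original post:

```shell
# Re-run tedana with a different ICA seed.
# File names and echo times (-e, in ms) are placeholders for your own data.
tedana -d echo-1.nii.gz echo-2.nii.gz echo-3.nii.gz \
  -e 14.5 38.5 62.5 \
  --out-dir out_seed2021 \
  --seed 2021
```

Writing to a separate output directory keeps the new run from overwriting the original results, which makes it easier to compare the two component selections.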

Tedana will create a mask if you don’t provide one. That’s why you’re seeing the same results. As @jbteves mentioned on the GitHub issue, looking at your data would be super helpful to find a solution :wink:
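If you do want to control masking explicitly rather than rely on the automatically generated mask, tedana accepts one via the --mask option. This is a sketch with placeholder file names:

```shell
# Supply an explicit brain mask instead of letting tedana compute one.
# brain_mask.nii.gz is a placeholder; it must be in the same space as the echoes.
tedana -d echo-1.nii.gz echo-2.nii.gz echo-3.nii.gz \
  -e 14.5 38.5 62.5 \
  --mask brain_mask.nii.gz
```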

Thanks e.urunuela and tsalo! Changing the random seed resolved the issue for almost all scans that had it. I noticed that changing the seed value changes the number of accepted components even when tedana runs without an issue. I was wondering if there is a recommended range for the random seed value. Is there a cutoff value?

P.S.: I shared links to two sample scans from my dataset (one where changing the seed solved the issue and one where it didn’t) in the GitHub post.

It’s great to see that changing the seed worked.

AFAIK there is no range or cutoff value for the seed.

Shall we continue the discussion on the GitHub issue? Just to make sure we don’t have the same discussion in two different threads.
