Tedana : "No BOLD components detected! Please check data and results!" How to check my data?

Hello everyone,

I am sorry if this is a topic that has already been discussed on this website, but I am stuck and have tried the main solutions proposed here.

So we are doing several scans of several minutes each on a 3T scanner, with an MBME (multiband, multi-echo) sequence. The experiment is a breath-hold calibration (as done in this article).

We have four echoes: 9.1, 25, 39.6, and 54.3 ms.
If I use the raw data of the four echoes, everything works well, but after the preprocessing (unifying, realignment, despiking, and coregistration) I get the following.

Warnings and errors I get:
divide by zero encountered in true_divide
F_T2 = (alpha - SSE_T2) * (j_echo - 1) / (SSE_T2)
divide by zero encountered in true_divide
F_S0 = (alpha - SSE_S0) * (j_echo - 1) / (SSE_S0)

and
No BOLD components found. Re-attempting ICA.
After maxrestarts is reached, I get the message in the title (No BOLD components detected! Please check data and results!).

I have tried many different seeds and none seem to work. I could run through all possible seeds, but I would like to find a less brute-force method ^^.
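For reference, the kind of loop I am running looks roughly like this (a sketch; the file names are placeholders, and the flags are the ones documented in the tedana CLI):

```bash
# Brute-force retry over ICA seeds (echo times in ms, as above)
for seed in 1 2 3 4 5; do
    tedana -d echo1.nii.gz echo2.nii.gz echo3.nii.gz echo4.nii.gz \
           -e 9.1 25 39.6 54.3 \
           --seed ${seed} --out-dir tedana_seed_${seed}
done
```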

How can I verify that my data aren't just noise, or that the preprocessing is flawed?
Figures output by tedana can be found here: tedana results - Google Drive

Thanks for any help you can give me :)

Hi ztn,
A few comments, then questions.

First thing I see from the figures is that you have 35 TRs; is that correct? That would seem to disagree with your "several minutes", so I am concerned that something went awry.

If that is not a mistake, then that is an extraordinarily small number of volumes and may be insufficient for the PCA/ICA type of approach. It's not guaranteed to fail, but it is about 3 to 4x shorter than I would typically use. (There is no hard rule, sorry, but I try to aim for at least 100 volumes; others likely do differently.)

In that article, I see runs with 355 volumes employed.

No need to worry about those divide-by-zero warnings; they are not the cause here.

I'm generally concerned with the data that I see anyway. For example, this component (16):

[figure: component 16 map]

It has a strange ring in it, as do others, which makes me think that maybe fat saturation was off (or there was too much acceleration)? Without seeing the data, I'm purely guessing, but it doesn't quite look right to me. But hey, maybe tedana will be able to pull something like that out!

Questions:

You mention unifying - what do you mean?

How was this processing performed? Those steps are in an odd order (despiking tends to occur before realignment in a typical AFNI pipeline, for example). Did you perform slice timing correction?

Hello, thank you for your response.

This dataset is a test on the flip angle, to measure the tSNR, so we ran a small number of TRs while doing a breath-holding task. It might be short, yes, but it wasn't the most important run; I've put a link to a bigger breath-holding run below.
For info, the TR is 4.115 s.
I tried again on the bigger dataset; I got warnings, but after 6 attempts it found the BOLD components. It is so random!

Concerning the preprocessing steps, unifying is a pre-step before skull-stripping; that's what colleagues told me to use: 3dUnifize.

The pipeline is like so:

Anat:
3dUnifize
3dSkullStrip
antsRegistration (to MNI)
antsApplyTransforms

Echoes:
3dUnifize
MCFLIRT (realignment)
3dSkullStrip to create a mask (it doesn't handle the 4th dimension)
3dcalc to apply the mask
3dDespike
tedana
align_epi_anat.py -epi2anat on ECHO1
3dAllineate on all echoes
antsApplyTransforms on all echoes, with the transforms from the anat

I don't use the normalised datasets as they are too big for tedana :).

Here is the second dataset.

So it seems to work sometimes, and only on big datasets. That's a bummer; does it mean my small datasets are too small?
Have you got any advice on the preprocessing pipeline?
Thanks,
ztn

Thanks for clarifying; things make a bit more sense, but unfortunately I have more concerns. Let's take it from the top, though.

For your short run, denoising is likely not appropriate. You can get a tSNR from the 'optimal combination' (Using tedana from the command line - tedana 0.0.12+0.g863304b.dirty documentation) without denoising. I'm not completely sure what your goal is when you say a "test on the flip_angle", but if you are varying the flip angle, getting 35 volumes, and then repeating that process, then you would want to see tSNR without denoising anyway. That way it is a fair comparison.
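In case it is useful, here is a minimal sketch of that comparison, assuming AFNI plus the t2smap workflow that ships with tedana (file names are placeholders, and the name of the optimally combined output can differ between tedana versions):

```bash
# Optimally combine the echoes with no ICA denoising at all
t2smap -d echo1.nii.gz echo2.nii.gz echo3.nii.gz echo4.nii.gz \
       -e 9.1 25 39.6 54.3 --out-dir t2smap_out

# One common tSNR definition: temporal mean / temporal standard deviation
3dTstat -cvarinv -prefix tsnr_optcom.nii.gz t2smap_out/desc-optcom_bold.nii.gz
```

You would then compare tsnr_optcom.nii.gz across flip angles, so that no denoising step can bias the comparison.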

For the tedana results, I'm very worried about your preprocessing. If 3dUnifize is being used on each echo independently, that is probably wrong: it is altering the very important echo-specific information. If you are just applying it to a dataset to create a mask, and then not using the unifized data, that would be OK.

Even then, it is also not clear why you are applying a brain mask as an early step. That seems unnecessary.
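If all you need is a mask, one way to get it without touching the echo data at all is a sketch like the following, assuming AFNI's 3dAutomask and tedana's --mask option (file names are placeholders):

```bash
# Build a brain mask directly from the first echo; 3dAutomask is designed for EPI
3dAutomask -prefix brain_mask.nii.gz echo1.nii.gz

# Hand the mask to tedana; the echo time series themselves stay unmodified
tedana -d echo1.nii.gz echo2.nii.gz echo3.nii.gz echo4.nii.gz \
       -e 9.1 25 39.6 54.3 --mask brain_mask.nii.gz
```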

Are you estimating the parameters from one echo with MCFLIRT, then applying them to all of the echoes? Motion correction should not be run on each echo independently.

Despiking should probably be a first step, rather than the last step before tedana.

With a 4-second TR, you should be using slice timing correction.

Your normalized data being too big for tedana is likely because, when you apply the ANTs transforms, you are writing out very small voxels, which just makes your datasets huge for no purpose (upsampling during processing can be useful in very high-resolution scenarios, but not here).

For the second dataset I am even more concerned. You have enough timepoints, but I see very little meaningful structure there, other than the artifact previously noted. Given all of the potential issues above, it may not be very useful to look at the tedana output.

The only thing I can say there is that it looks like the MRI sequence is potentially not good. You have a very long TR (despite using multiband??), but also strange artifacts that show up in many of the components. The brain mask that you have used looks like it may also not be very good, but again, it is hard to tell from just the components.

What are your MR sequence parameters (GRAPPA, multiband, voxel size, field of view, bandwidth, etc.)? It looks almost like you are using multiband 4, but to do that and still have a 4 s TR... well, that is extraordinarily confusing. What does each echo look like? How many channels does your head coil have?

tedana will work on datasets of varying size, but 35 volumes could be too short for denoising. The small datasets may be too small, but they may also just be bad data.

I would recommend using afni_proc.py (AFNI program: afni_proc.py); consider example 12 and its various approaches. It looks like you already have AFNI installed. You could also use fMRIPrep, but I don't know much about that.

Assuming that the MRI data itself can be salvaged... I would suggest processing the data in a very, very simple way first, like the following (a command-level sketch comes after the list).

Slice-time correct each echo
Motion correct the first echo and apply those parameters to echoes 1, 2, 3, and 4
tedana
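Here is a rough sketch of those three steps with AFNI tools (the slice-timing pattern, file names, and echo times are assumptions; with multiband you will likely need to pass your actual slice timings, e.g. via -tpattern @your_slice_times.txt):

```bash
# 1) Slice timing correction, applied identically to every echo
for e in 1 2 3 4; do
    3dTshift -tpattern alt+z -prefix echo${e}_ts.nii.gz echo${e}.nii.gz
done

# 2) Estimate motion from the first echo only, saving per-volume matrices
3dvolreg -base echo1_ts.nii.gz'[0]' -1Dmatrix_save mc_mats \
         -prefix echo1_mc.nii.gz echo1_ts.nii.gz

# ...then apply those exact parameters to the remaining echoes
for e in 2 3 4; do
    3dAllineate -1Dmatrix_apply mc_mats.aff12.1D \
                -source echo${e}_ts.nii.gz -prefix echo${e}_mc.nii.gz
done

# 3) tedana on the motion-corrected echoes
tedana -d echo1_mc.nii.gz echo2_mc.nii.gz echo3_mc.nii.gz echo4_mc.nii.gz \
       -e 9.1 25 39.6 54.3
```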

You can then add 3dAllineate, ANTs, etc.; those steps seem fine, except for the possibility that you are taking a 3x3x3 mm dataset to 1x1x1 mm. I believe there is an option for a reference volume when applying ANTs transforms.

I would take the MNI template, resample it to your fMRI voxel size (3dresample -dxyz 3 3 3 -prefix resampled_MNI_template.nii.gz -input Your_MNI_template_image.nii.gz), and use that as the reference image, assuming your voxels are indeed 3x3x3 mm in the example provided.
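For example, something like this should keep the output on the 3 mm grid (a sketch; the transform file names are placeholders for whatever your antsRegistration call produced):

```bash
# Use the resampled template as the reference grid when applying the transforms;
# -e 3 tells antsApplyTransforms that the input is a time series
antsApplyTransforms -d 3 -e 3 \
    -i echo1_mc.nii.gz \
    -r resampled_MNI_template.nii.gz \
    -t anat_to_MNI_1Warp.nii.gz -t anat_to_MNI_0GenericAffine.mat \
    -o echo1_mni.nii.gz
```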


Hello,
The TR of 4.115 s is because we run an ASL-BOLD sequence; this is why we have such a long TR. I think this is where the artefacts come from as well (and why we are experimenting with the flip angle).
Basically, we are reproducing a calibrated fMRI sequence from the article, which enables us to extract the ASL and BOLD signals within one experiment.
Sorry for not mentioning this clearly in my first message.
I'll put some experiment parameters at the bottom of this message.

Thank you very much for your advice; it makes perfect sense indeed for MCFLIRT and the tSNR calculations. I hope it will bring better results.

Regarding the voxel sizes, they are indeed 3x3x3 mm, and are converted to 1x1x1 mm through normalisation. Maybe I could do -anat2epi rather than -epi2anat, which should morph the anat and MNI images towards the fMRI grid.
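For anyone reading later, the direction switch I mean would look something like this (a sketch; file names are placeholders):

```bash
# Align the anatomical to the EPI grid instead of resampling the EPI up to the anat
align_epi_anat.py -anat2epi -anat anat.nii.gz \
                  -epi echo1_mc.nii.gz -epi_base 0
```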

I was considering using the standard afni_proc.py or fMRIPrep to do all this, but I also wanted to fully understand everything that was going on. Thank you so much for your clear advice.

Experiment parameters:
MB factor = 4
FOV = 240 mm
resolution = 3 × 3 × 3 mm
bandwidth (BW) = 62.5 kHz
Basically, we are using the same procedure as in the article.

Thanks again

Everything begins to make much more sense now! OK, so it is more like other work from Cohen et al. here that you are following, not the version with the shorter TR you linked.

For your MNI conversion, you should absolutely be using a reference image if you use ANTs: --reference-image. Outside of some very specific methods (for example, layer fMRI) there is absolutely no reason to go to 1x1x1 mm, especially from 3x3x3 mm. It dramatically increases the data storage requirements and processing time for no effective gain.

That said, you should be able to use a minimal processing pipeline (slice timing correction, motion correction) on the longer runs and see how tedana does. You can certainly try with the shorter runs, but I still think those may be too short.

The most important thing is that you do not alter the values within each echo by scaling or unifying things. Interpolation (from slice timing or motion correction) is fine, but it should be identical between echoes. For example, if you use slice timing correction, use it on all echoes (each echo can use the same slice offsets), and with motion correction, estimate it from one echo (I prefer the 1st; others use later ones) and apply those exact parameters to the other echoes. See details here, in the tedana docs.


Thank you for your clear indications, Logan.
It is much clearer now; I'll follow all your advice and hopefully will get better results.
I think most of the problems come from the processing, as the sequence has been used before with good results.

Thanks again

Good luck! Hopefully you end up with some nice images (and good data, of course!)
