Summary report of XCP-D using multi-echo BOLD and TEDANA

Dear NeuroStars community,

I am analyzing some resting-state MB-ME data (MB factor 4, iPAT (GRAPPA) factor 2, TEs = 12.2, 26.8, 41.4 ms) using fMRIPrep, tedana, and XCP-D, following this example. Specifically, I used a modified confound regressor file based on tedana’s ICA components together with aCompCor.

However, looking at the summary report I was surprised to see a lot of residual global signal remaining, possibly related to deep breaths or to eye closing. I was hoping that aCompCor would be enough to remove these artifacts.

Does this look funny to you too?

Many thanks!

Can you share the full XCP-D command you used?

Certainly (I changed the path names to make everything shorter).

/usr/local/miniconda/bin/xcp_d \
    /Path/To/BIDS/derivatives/ \
    /Path/To/BIDS/derivatives/xcp_d_060_acompcor \
    participant \
    -w /Path/To/BIDS/derivatives/xcpd_work \
    --participant_label 001 \
    --nuisance-regressors acompcor \
    --custom_confounds /Path/To/BIDS/derivatives/custom_confounds_for_xcpd \
    --bids-filter-file /Path/To/Scripts/xcpd_bids_filter.json

Interestingly, if I exclude the custom confounds from tedana, the result looks much better:

Going back to tedana’s report, I noticed that the mean time series of the first accepted component actually includes the very large whole-brain variations we see in the preprocessed data:

The IC time series:

The whole-brain time series:

The position of this component on the rho/kappa space also makes it clear why this component was accepted:

However, I’m still puzzled: Since the effect of this component is evident throughout the brain, including in white-matter, I expected aCompCor to take care of this. Am I missing something fundamental here?

Many thanks for your help!

That definitely explains things. The components you flag as “signal” in the custom confounds file will be used to orthogonalize each of the “nuisance” regressors: not just those from the custom confounds file, but also the ones from your nuisance regression strategy. That means that the general pattern from your high-variance accepted component will be removed from the aCompCor regressors. Would you be willing to try the following?

  1. Perform the orthogonalization just using the tedana components. Namely, orthogonalize the rejected components w.r.t. the accepted components. You can do this with numpy, or you can re-run tedana with the --tedort flag enabled.
  2. Next, create a custom confounds file just including the orthogonalized, rejected components. No need to include the accepted components with the signal__ prefix.
  3. Finally, feed in your new custom confounds file to XCP-D, as you ran it before.
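The numpy route in step 1 might look something like this. This is a minimal sketch with synthetic data: in practice the mixing matrix would come from tedana’s mixing-matrix output and the accepted/rejected labels from its metrics table.

```python
import numpy as np

# Synthetic stand-in for a tedana mixing matrix: time x components,
# with a boolean mask marking which components were accepted.
rng = np.random.default_rng(0)
n_vols, n_comps = 200, 10
mixing = rng.standard_normal((n_vols, n_comps))
accepted = np.array([True] * 4 + [False] * 6)

acc = mixing[:, accepted]    # "signal" components
rej = mixing[:, ~accepted]   # "noise" components

# Regress the accepted components out of the rejected ones, keeping
# only the part of each rejected component that is orthogonal to the
# accepted (signal) components.
betas, *_ = np.linalg.lstsq(acc, rej, rcond=None)
rej_ortho = rej - acc @ betas

# The orthogonalized noise regressors now share no variance with the
# accepted components.
print(np.allclose(acc.T @ rej_ortho, 0, atol=1e-8))  # True
```

`rej_ortho` is what would go into the custom confounds file in step 2; re-running tedana with `--tedort` should produce the equivalent orthogonalized mixing matrix for you.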

Thanks, @tsalo! That did the trick. I had things backward in my mind regarding what is regressed out of what.

The relevant carpet plots now look much better:

Would you say this is the recommended way to go in general, since tedana seems to sometimes err on the side of false positive components (see another example from a different dataset here)?
At least, as long as we believe that aCompCor components are noise and that motion-related regressors do not represent BOLD signal, this sounds fine to me, but I would love to hear your opinion on this.

Many thanks!

For future reference, to complete step 2 in your explanation, I replaced the following lines in the xcpd-tedana example code:

# Prepend "signal__" to all accepted components' column names
accepted_columns = metrics_df.loc[metrics_df["classification"] != "rejected", "Component"]
mixing_matrix = mixing_matrix.rename(columns={c: f"signal__{c}" for c in accepted_columns})

with these, which omit the accepted components altogether instead of prepending "signal__" to them, following the advice here:

# Omit the accepted components instead of prepending "signal__" to them
accepted_columns = metrics_df.loc[metrics_df["classification"] != "rejected", "Component"]
mixing_matrix = mixing_matrix.drop(columns=accepted_columns, errors='ignore')
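As a self-contained check of that replacement (with a toy metrics table and mixing matrix standing in for tedana’s actual TSV outputs):

```python
import pandas as pd

# Toy stand-ins for tedana's metrics table and mixing matrix.
metrics_df = pd.DataFrame({
    "Component": ["ICA_00", "ICA_01", "ICA_02"],
    "classification": ["accepted", "rejected", "rejected"],
})
mixing_matrix = pd.DataFrame({
    "ICA_00": [0.1, 0.2],
    "ICA_01": [0.3, 0.4],
    "ICA_02": [0.5, 0.6],
})

# Drop the accepted components so only rejected ones remain as confounds.
accepted_columns = metrics_df.loc[metrics_df["classification"] != "rejected", "Component"]
confounds = mixing_matrix.drop(columns=accepted_columns, errors="ignore")
print(list(confounds.columns))  # ['ICA_01', 'ICA_02']
```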

I hadn’t considered it before this thread, but yes, it does seem like orthogonalizing before calling XCP-D is the way to go. The only alternative would be for tedana users to manually review and correct the component classifications, as tedana cannot flag BOLD-based noise signals (like global signal).

There are a few options in tedana for global signal control (--gscontrol gsr and --gscontrol mir), but I haven’t played with them much, so I don’t know what impact they would have on the final set of components.

Thanks @tsalo, Your advice was very helpful!
Just to make sure I’ve got this right:
This current pipeline only uses the ME-ICA outputs of tedana (to regress out the noise components). It does not incorporate the optimal combination of the different echoes. To achieve this, I could either:

  1. Transform the optimally combined data to MNI152NLin2009cAsym space, and then run XCP-D, which will now also analyze this newly created file. or…
  2. Transform each echo separately to MNI152NLin2009cAsym space, then run XCP-D, and finally optimally combine the transformed+regressed echo files.

Since all steps are linear, I lean towards the simpler option 1.
I’d greatly appreciate it if you could point out any flaws in this pipeline.
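For what it’s worth, the linearity argument behind option 1 can be checked numerically: for fixed combination weights, confound regression and the weighted echo combination commute. A toy sketch with synthetic single-voxel time series (the weights here are arbitrary stand-ins for the T2*-based ones):

```python
import numpy as np

rng = np.random.default_rng(1)
n_vols = 100
echo1 = rng.standard_normal(n_vols)
echo2 = rng.standard_normal(n_vols)
confounds = rng.standard_normal((n_vols, 3))
w1, w2 = 0.6, 0.4  # hypothetical combination weights

def regress_out(y, X):
    """Remove the least-squares fit of X from y."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Option 1: combine the echoes first, then regress.
combined_then_regressed = regress_out(w1 * echo1 + w2 * echo2, confounds)

# Option 2: regress each echo, then combine.
regressed_then_combined = (
    w1 * regress_out(echo1, confounds) + w2 * regress_out(echo2, confounds)
)

print(np.allclose(combined_then_regressed, regressed_then_combined))  # True
```

Note this equivalence holds per voxel because the optimal-combination weights are fixed over time; it says nothing about nonlinear postprocessing steps.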

(The reason I’m asking is that I don’t see a great difference between single-echo acquisition with XCP-D and multi-echo acquisition with tedana+XCP-D in terms of within-network coherence, which I assumed should now be higher, or between-network connectivity, which I assumed should now be lower.)

Many thanks!

I believe the recommended pipeline would be to use the optimally-combined output from fMRIPrep, which, if you requested standard-space outputs, should be in a standard space.

That way, you don’t have to transform anything.

I can’t speak to the within-network and between-network connectivity measures, sorry. While I maintain a resting-state functional connectivity workflow, I typically use task data to validate denoising methods. Maybe @dowdlelt could weigh in on your findings? Assuming you’re not covering the same topic in Compare two protocols for TEDANA.

Thanks, @tsalo! I hadn’t noticed that the BOLD output of fMRIPrep is already optimally combined.

And thanks for noticing, that’s exactly what I’m covering in that other thread.