With the latest XCP-D update, I think the code provided here is no longer completely accurate, so I wanted to go over the process of incorporating TEDANA-rejected components as regressors in XCP-D.
Here is an overview of my processing; I'd like to confirm whether it is correct:
1) Preprocess with fMRIPrep using the flag to output individual echo files;
2) Remove dummy volumes from these echo files based on fMRIPrep's automated detection of non_steady_state_vols (also from the files in standard space and the confounds files);
3) Run TEDANA with the --tedort option to orthogonalize rejected components;
4) Use the file labelled "…desc-ICAOrth_mixing.tsv" as confounds and mark the accepted components by prepending "signal__"; if there are any additional confounds (motion, GSR, etc.) you wish to regress, add them here as well;
5) Run XCP-D using the previous matrix as a custom confound file to denoise the optimally combined time series (either the one from TEDANA or the one from fMRIPrep, as long as you remove the dummy volumes from the latter). Don't forget to configure the YAML file.
While reading the documentation, it's not really clear to me whether steps 4) and 5) are correct or redundant.
You're right, that example is outdated; I haven't updated it since I changed how XCP-D handles non-fMRIPrep confounds in version 0.10.0. In the long run, I hope that tedana inputs to XCP-D will be generated by fMRIPost-tedana, so that they're automatically organized as BIDS derivatives datasets. Until fMRIPost-tedana is up and running, however, you will need to organize your tedana outputs into something resembling a BIDS dataset.
I haven’t actually done this yet, but here are the steps to follow:
1) Preprocess with fMRIPrep using the flag to output individual echo files;
2) Remove dummy volumes from these echo files based on fMRIPrep's automated detection of non_steady_state_vols (also from the files in standard space and the confounds files);
3) Run TEDANA with the --tedort option to orthogonalize rejected components;
4) Create a confounds TSV file containing only the noise components from desc-ICAOrth_mixing.tsv:
- The file should be organized in a BIDS dataset with an appropriate BIDS name. You can simply follow the naming and organization of your fMRIPrep derivatives if you want.
- You do not need to prepend signal__, since the noise components are orthogonalized with respect to the signal components.
- Add NaNs to the beginning of the confounds file to account for the dummy volumes. For example, if fMRIPrep flagged 3 dummy volumes (or you used an explicit number to override the fMRIPrep estimate), there should be 3 rows of NaNs at the top of this file.
5) Create a tedana config file for XCP-D (the AROMA one is a good base for this).
Here’s a first attempt:
```yaml
name: tedana
description: |
  Nuisance regressors were selected according to the 'tedana' strategy.
  ICA components flagged by tedana [CITATION] as non-BOLD noise
  with the "me-ica" decision tree, mean white matter signal,
  and mean cerebrospinal fluid signal were selected as nuisance regressors.
confounds:
  preproc_confounds:
    dataset: preprocessed
    query:
      space: null
      cohort: null
      res: null
      den: null
      desc: confounds
      extension: .tsv
      suffix: timeseries
    # NOTE: I kept csf and white_matter here primarily as an example of
    # how to use fMRIPrep confounds in conjunction with the tedana ones.
    # Drop them if you want.
    columns:
      - csf
      - white_matter
  tedana_confounds:
    dataset: tedana
    # NOTE: I wrote this as if you organized your tedana derivatives like fMRIPrep,
    # e.g., sub-01_ses-01_task-rest_desc-confounds_timeseries.tsv.
    # Change as you wish.
    query:
      space: null
      cohort: null
      res: null
      den: null
      desc: confounds
      extension: .tsv
      suffix: timeseries
    columns:
      # Regular expressions begin with ``^`` and end with ``$``.
      # NOTE: I wrote this as if column names were "tedana_orth_noise_[number]".
      # Change based on how you choose to name them.
      - ^tedana_orth_noise_.*$
```
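The dummy-volume trimming and the NaN-padded noise-components TSV described above can be sketched in Python. Synthetic arrays stand in for the real files here; the metrics-table column names ("Component", "classification"), the output column naming, and the number of dummy volumes are assumptions to adapt to your data:

```python
import numpy as np
import pandas as pd

# With real data you would load the echo image with nibabel
# (nib.load(...).slicer[..., N_DUMMY:]) and the mixing/metrics TSVs
# with pd.read_table; synthetic stand-ins are used here.
N_DUMMY = 3    # non-steady-state volumes flagged by fMRIPrep (assumption)
N_VOLS = 100   # total volumes in the raw run (assumption)

# Trim dummy volumes from a (synthetic) preprocessed echo time series.
echo_data = np.random.rand(4, 4, 4, N_VOLS)
trimmed = echo_data[..., N_DUMMY:]  # keep only post-dummy volumes

# Stand-ins for desc-ICAOrth_mixing.tsv (one row per post-dummy volume) and
# a tedana metrics table mapping components to accepted/rejected labels.
components = [f"ICA_{i:02d}" for i in range(5)]
mixing = pd.DataFrame(np.random.rand(N_VOLS - N_DUMMY, 5), columns=components)
metrics = pd.DataFrame({
    "Component": components,
    "classification": ["accepted", "rejected", "accepted", "rejected", "rejected"],
})

# Keep only the rejected (noise) components.
noise_cols = metrics.loc[metrics["classification"] == "rejected", "Component"]
noise = mixing[list(noise_cols)].copy()
# Rename columns so they match the regex in the config file (naming is a choice).
noise.columns = [f"tedana_orth_noise_{i}" for i in range(noise.shape[1])]

# Prepend one NaN row per dummy volume, as described above.
nan_pad = pd.DataFrame(np.nan, index=range(N_DUMMY), columns=noise.columns)
confounds = pd.concat([nan_pad, noise], ignore_index=True)
confounds.to_csv(
    "sub-01_task-rest_desc-confounds_timeseries.tsv",
    sep="\t", index=False, na_rep="n/a",  # n/a is the BIDS missing-value convention
)
```

The resulting TSV has the same number of rows as the raw run (dummy volumes included), which is what lets its rows line up with the fMRIPrep confounds.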
Call XCP-D with --datasets tedana=/path/to/tedana_dset (i.e., use the tedana label for that dataset, so it matches what’s in the config file). You’ll need to include a dataset_description.json in the tedana dataset too.
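A minimal sketch of that scaffolding, assuming illustrative directory, subject, and metadata names (adapt all of them to your own data):

```python
import json
from pathlib import Path

# Minimal BIDS-derivatives-like scaffold for the tedana dataset.
tedana_dset = Path("tedana_dset")
func_dir = tedana_dset / "sub-01" / "func"
func_dir.mkdir(parents=True, exist_ok=True)

# XCP-D needs a dataset_description.json to treat this directory as a dataset.
description = {
    "Name": "tedana",
    "BIDSVersion": "1.8.0",
    "DatasetType": "derivative",
    "GeneratedBy": [{"Name": "tedana"}],
}
(tedana_dset / "dataset_description.json").write_text(json.dumps(description, indent=2))

# The NaN-padded noise-component TSV from the previous step then goes to, e.g.:
# tedana_dset/sub-01/func/sub-01_task-rest_desc-confounds_timeseries.tsv
```

XCP-D would then be pointed at this directory with --datasets tedana=/path/to/tedana_dset, so the tedana label matches the dataset entries in the config file.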