Confounds from fmriprep: which one would you use for GLM?

With leading NaNs, wouldn't that suggest it is using the previous and current time points?

Replacing NaNs with the mean value per regressor sounds like a good plan. That is what the denoiser tool uses, so I'll go with it and maybe try out that tool later. I think I'll stick with SPM for now.
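For reference, this is roughly what I mean by mean imputation per regressor (the column names and values are just examples):

```python
import numpy as np
import pandas as pd

# Example confounds table with leading NaNs, as in fMRIPrep's TSV output
confounds = pd.DataFrame({
    "framewise_displacement": [np.nan, 0.1, 0.2, 0.15],
    "std_dvars": [np.nan, 1.1, 1.0, 1.2],
})

# Replace NaNs in each column with that column's mean
confounds_filled = confounds.fillna(confounds.mean())
```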

As always, thanks much.

That’s correct - FD is the movement between the previous and current timepoint. Sorry for the confusion.
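For reference, a rough sketch of how FD in the style of Power et al. is computed from the six realignment parameters (rotations converted to millimeters assuming a 50 mm head radius; this is an illustration, not fMRIPrep's exact implementation):

```python
import numpy as np

def framewise_displacement(motion_params, head_radius=50.0):
    """motion_params: array of shape (timepoints, 6) holding
    3 translations (mm) and 3 rotations (radians).
    Returns FD per timepoint; the first value is NaN because
    there is no previous timepoint to difference against."""
    params = np.asarray(motion_params, dtype=float).copy()
    # Convert rotations to arc length on a sphere of the given radius
    params[:, 3:] *= head_radius
    # Sum of absolute backward differences across the six parameters
    diffs = np.abs(np.diff(params, axis=0))
    return np.concatenate([[np.nan], diffs.sum(axis=1)])

motion = np.zeros((3, 6))
motion[1, 0] = 0.2   # 0.2 mm translation at timepoint 1
motion[2, 3] = 0.01  # 0.01 rad rotation at timepoint 2
fd = framewise_displacement(motion)
```

The leading NaN is exactly the one you were seeing in the confounds file.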

BTW there is a new tool in development that takes in outputs of FMRIPREP (or any other 4D nifti + TSV file) and performs various denoising strategies. It’s still work in progress, but you should check it out: https://github.com/arielletambini/denoiser

Is this tool still under development? I am trying to get it to work, but I run into problems when installing the carpet_plot function, the command:

pip install git+github.com/chrisfilo/nilearn.git@enh/carpet_plot

gives me this error:

Invalid requirement: 'git+github.com/chrisfilo/nilearn.git@enh/carpet_plot'
It looks like a path. File 'git+github.com/chrisfilo/nilearn.git@enh/carpet_plot' does not exist.

Is there a better/recommended way of doing detrending with nilearn.signal.clean and the confounds TSV files returned by fMRIprep? Does anyone have example code?

pip install -r requirements.txt should install all dependencies (including git+https://github.com/chrisfilo/nilearn.git@enh/carpet_plot; for some reason your command excluded https://)

Oops, I was copying from here: https://github.com/arielletambini/denoiser/issues/4

Are there any plans to include detrending, using this function or something else, in fMRIprep? Doing detrending in volume space seems preferable, and necessary if motion parameters are to be taken into account, which makes it off-limits for people interested in surface space output from fMRIprep.

Not at this time.

We should be providing enough transforms to allow you to perform detrending in the volume and then resample on the surface using mri_vol2surf. Is there a reason in principle this shouldn’t work, or is it more a question of how to perform the resampling, given the fMRIPrep derivatives?

No reason in principle. It also occurs to me now that detrending in volume space is probably not really necessary, although I would prefer it. I would still argue that including detrending (and scaling to percent signal change) in fMRIPrep would be a useful option for many users.

I don’t disagree that it would be useful, but the point of fMRIPrep has been to do “minimal” preprocessing. That is, focus on the stuff that people generally agree on, use the best tools available for each task, and then leave the myriad choices for further processing to downstream analysis tools.

We are working on a next-step preprocessing tool called FitLins (for Fitting Linear models). We don’t currently support detrending, but I think it would be a fairly reasonable place to put it, assuming you don’t mind it happening in the target space. I think a denoiser that is capable of working in the original space and then using fMRIPrep’s output transforms to sample to the desired analysis space would also be very useful.

Dear all,

Thank you for the information on this thread. I was wondering about some details:
I am trying to write a version of aCompCor in Matlab and was wondering about its inner workings. Just to clarify: when doing aCompCor, assuming your data is in the form voxels (rows) x timepoints (columns), after running the PCA, what you would take as regressors are the time courses of the first six components (a 6 x timepoints matrix)? Is that what the method does?
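To make my question concrete, this is the kind of procedure I have in mind (a sketch of my understanding, not fMRIPrep's actual implementation):

```python
import numpy as np

def acompcor_sketch(data, n_components=6):
    """data: voxels x timepoints array restricted to a noise (WM/CSF) mask.
    Returns a timepoints x n_components matrix of noise regressors."""
    # Center each voxel's time series
    centered = data - data.mean(axis=1, keepdims=True)
    # SVD of the timepoints x voxels matrix; the left singular vectors
    # are the principal component time courses
    u, s, vt = np.linalg.svd(centered.T, full_matrices=False)
    return u[:, :n_components]

rng = np.random.default_rng(0)
data = rng.standard_normal((500, 120))  # 500 noise-mask voxels, 120 timepoints
regressors = acompcor_sketch(data, n_components=6)
```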

To extract the “noise mask” do you do this for each individual subject by coregistering the structural and functional images in native space, then segmenting and eroding the structural or do you use one set of anatomical masks for everyone in the MNI space? Would you think the two things are very different? The reason why I ask is that I have some subjects for whom I have preprocessed images in MNI space but not necessarily the T1 in native space.
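For context, the per-subject erosion step I had in mind looks roughly like this (using scipy on a toy binary white-matter mask; the real mask would come from the coregistered segmentation):

```python
import numpy as np
from scipy.ndimage import binary_erosion

# Toy binary "white matter" mask: a 10x10x10 cube of ones
mask = np.zeros((14, 14, 14), dtype=bool)
mask[2:12, 2:12, 2:12] = True

# Erode by one voxel to reduce partial-volume contamination
# at the tissue boundary
eroded = binary_erosion(mask, iterations=1)
```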

Do you do any censoring/interpolation of volumes based on framewise displacement, or do you just include the displacement as a regressor?
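In code, the censoring option I am weighing would look something like this (0.5 mm is just an example threshold; flagged volumes become one-hot "spike" regressors):

```python
import numpy as np

fd = np.array([np.nan, 0.1, 0.8, 0.2, 0.6, 0.1])  # example FD trace

threshold = 0.5  # mm; an arbitrary example cutoff
# Volumes exceeding the threshold are flagged for censoring;
# the leading NaN (first volume) is conventionally kept
censor = np.nan_to_num(fd) > threshold

# One column per censored volume, 1 at the censored timepoint
spikes = np.eye(len(fd))[:, censor]
```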
Thank you very much,

Leonardo Tozzi

Is there a recommendation (or documentation) re. use of aCompCor / tCompCor when interested in resting-state data with specific interest in subcortical signal?

My current take with fMRIprep output is along the lines of this Python script, followed by re-adding the mean image to the denoised 4D image with FSL’s fslmaths.

import nibabel as nib
import nilearn.image as nii
import pandas as pd
from argparse import ArgumentParser

parser = ArgumentParser()
parser.add_argument('input')
parser.add_argument('output')
parser.add_argument('confounds')
parser.add_argument('t_r', type=float)  # repetition time in seconds

args = parser.parse_args()
input_img = nib.load(args.input)

confound_data = pd.read_csv(args.confounds, sep="\t")
confound_columns = ['a_comp_cor_00', 'a_comp_cor_01', 'a_comp_cor_02',
                    'a_comp_cor_03', 'a_comp_cor_04', 'a_comp_cor_05',
                    'cosine00', 'cosine01', 'cosine02', 'cosine03',
                    'cosine04', 'cosine05', 'trans_x', 'trans_y', 'trans_z',
                    'rot_x', 'rot_y', 'rot_z']
# DataFrame.as_matrix() was removed from pandas; use to_numpy() instead
confound_matrix = confound_data[confound_columns].to_numpy()
regressed_img = nii.clean_img(input_img, t_r=args.t_r, confounds=confound_matrix)

nib.save(regressed_img, args.output)

You can look at this:
https://xcpengine.readthedocs.io/

Hi Chris,

Thank you for providing such clear guidance with regards to FMRIPREP’s nuisance regressors. As a follow-up question, now that the more recent implementation of FMRIPREP produces many aCompCor variables (90 in my case), would you recommend including all aCompCor regressors in addition to the 6 motion parameters and FD at the run level?

Many thanks!
Monica


Hi @Monica, you should select a number of aCompCor components that explains a pre-specified percentage of variance; please check the corresponding section of the individual report generated. fMRIPrep orders the components by the variance they explain.

How you set that threshold is mostly based on experience. This paper - https://doi.org/10.1016/j.neuroimage.2017.03.020 should give you some of the recommendations you need.
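As a sketch of what that selection could look like in code (assuming, hypothetically, that you have the cumulative variance explained per component, e.g. from the confounds JSON sidecar, and using 50% purely as an example cutoff):

```python
# Hypothetical per-component metadata; field names are illustrative
metadata = {
    "a_comp_cor_00": {"CumulativeVarianceExplained": 0.25},
    "a_comp_cor_01": {"CumulativeVarianceExplained": 0.42},
    "a_comp_cor_02": {"CumulativeVarianceExplained": 0.55},
    "a_comp_cor_03": {"CumulativeVarianceExplained": 0.63},
}

def select_components(metadata, threshold=0.5):
    """Keep components (in order) up to and including the first one
    whose cumulative variance explained crosses the threshold."""
    selected = []
    for name in sorted(metadata):
        selected.append(name)
        if metadata[name]["CumulativeVarianceExplained"] >= threshold:
            break
    return selected

components = select_components(metadata, threshold=0.5)
```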


Hi @oesteban, thank you very much for your response. That’s very helpful and I’ll be sure to take a closer look at the paper you’ve recommended.

@karofinc might be able to give you even better information and recommendations to use their fMRIDenoise tool.

Hi @Monica! To learn more about various confound regressors and popular denoising strategies I recommend a paper by Parkes et al. (2018): https://www.sciencedirect.com/science/article/pii/S1053811917310972. The paper offers a nice review and comparison of various denoising strategies + some very useful recommendations.

There is no gold standard for the denoising and the performance of a particular denoising pipeline may depend on the data. As a response to this problem, we’re developing fMRIDenoise – a tool for automated denoising, denoising strategies comparison, and functional connectivity data quality control (https://github.com/nbraingroup/fmridenoise).

In the simplest scenario for denoising in fMRIDenoise, only the path to your data in BIDS format is needed to run the entire procedure. The data should first be preprocessed with fMRIPrep (version > 1.4.0). Running the full procedure will denoise your data using the most popular denoising strategies and return FC quality measures, which may help you select the best performing one. It’s still an early version of the tool, but it should work well.

It is also a good practice to check whether your FC results are reproducible after using various denoising strategies.

Maybe you’ll find this useful for your application. We’ll be happy to hear your feedback. Please let me know if you need some further advice.

The software will be installable via pip in days.



Hi @karofinc,
Thank you very much for alerting me to fMRIDenoise! It sounds incredibly helpful and I will be sure to let you know how it runs once I’ve had a chance to try it out.

Many thanks,
Monica
