Confounds from fmriprep: which one would you use for GLM?

Thanks, but I was hoping you could recommend the software package with which it is easiest to remove these confounds (FSL GLM, nilearn NiftiMasker, etc.).

By “remove” you mean regress out then?

Yes. Sorry for not being clear.

fsl_glm will denoise and go from a 4D to a 4D NIfTI; NiftiMasker will call signal.clean, which can do detrending. See: http://nilearn.github.io/modules/generated/nilearn.signal.clean.html

Nilearn does not cover the 4D-to-4D scenario in a single function, but it is still possible to perform detrending + confound removal during masking, and then use the inverse_transform() method to regenerate a 4D image.
HTH.


nilearn.image.clean_img is a function built to operate directly on 4D NIfTI images (built around signal.clean). It may be worth trying this function.

If you are building pipelines, NiftiMasker or MultiNiftiMasker can do that for you at the transform level.


You can also use SPM12's GLM and supply the confounds as the "multiple regressors" file, e.g. a confounds.mat you've generated in MATLAB from the confounds.tsv file.

Another question related to the topic:
With the newer version I see a new, denoised (AROMA), 6 mm smoothed EPI file. So when I use this file, I don't have to add ICA-AROMA regressors in the GLM, as it has already been denoised?

thanks a lot, mike

Hi Mike,

That’s correct. ICA-AROMA has two denoising strategies: aggressive and non-aggressive. Aggressive is the normal approach of detrending based on the regressors marked as “noise”. Non-aggressive fits all regressors, and then re-adds the components attributed to the “signal” regressors. This isn’t a standard GLM, so we save the ICA-AROMA output as a convenience.
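
The difference between the two strategies can be sketched in a few lines of numpy (toy data; `noise_idx` marks which ICA components were labeled noise):

```python
import numpy as np

rng = np.random.default_rng(0)
n_tp, n_comp, n_vox = 100, 10, 50
mixing = rng.standard_normal((n_tp, n_comp))   # all ICA component time courses
data = rng.standard_normal((n_tp, n_vox))      # voxel time series
noise_idx = [2, 5, 7]                          # components labeled "noise"

# Aggressive: regress out the noise time courses alone.
noise = mixing[:, noise_idx]
beta_aggr = np.linalg.lstsq(noise, data, rcond=None)[0]
clean_aggr = data - noise @ beta_aggr

# Non-aggressive: fit ALL components jointly, then remove only the noise
# components' share of the fit (variance shared with "signal" components
# is kept).
beta_full = np.linalg.lstsq(mixing, data, rcond=None)[0]
clean_nonaggr = data - mixing[:, noise_idx] @ beta_full[noise_idx, :]
```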

For more details: https://fmriprep.readthedocs.io/en/latest/workflows.html#confounds-estimation

Chris


Some confounds are NaN for the first time point. When using the confounds for multiple nuisance regression, SPM applies singular-value decomposition, which cannot deal with NaN values. I may have missed some updates, but this seems at odds with what @ChrisGorgolewski said about these confounds (e.g. FD) being based on the current and next time points. In my confounds.tsv files, some confounds have leading NaNs, but none have trailing NaNs.

Would it be terrible to replace the NaN values with 0, or will I have to exclude confounds with a leading NaN (e.g. FD) for use in SPM?

Finally, this may be a good time to get updated opinions on which confounds to include in GLM analysis.

Thanks, Nick

We switched from trailing to leading NaNs a while ago (it really doesn't make much of a difference). They are still based on the current and the next time point.

Ideally I would replace them with the mean value (calculated for each regressor separately), but I don't think using 0 would yield very different results.

BTW, there is a new tool in development that takes the outputs of fMRIPrep (or any other 4D NIfTI + TSV file) and performs various denoising strategies. It's still a work in progress, but you should check it out: https://github.com/arielletambini/denoiser


With leading NaNs, wouldn't that suggest it is using the previous and current time points?

Replacing NaNs with the mean value per regressor sounds like a good plan. That is what the denoiser tool uses, so I'll go with it, and maybe try out that tool later. I think I'll stick with SPM for now.

As always, thanks much.

That’s correct - FD is the movement between the previous and current time points. Sorry for the confusion.

BTW there is a new tool in development that takes in outputs of FMRIPREP (or any other 4D nifti + TSV file) and performs various denoising strategies. It’s still work in progress, but you should check it out: https://github.com/arielletambini/denoiser

Is this tool still under development? I am trying to get it to work, but I run into problems when installing the carpet_plot function, the command:

pip install git+github.com/chrisfilo/nilearn.git@enh/carpet_plot

gives me this error:

Invalid requirement: ‘git+github.com/chrisfilo/nilearn.git@enh/carpet_plot’
It looks like a path. File ‘git+github.com/chrisfilo/nilearn.git@enh/carpet_plot’ does not exist.

Is there a better/recommended way of doing detrending with nilearn.signal.clean and the confounds TSV files returned by fMRIprep? Does anyone have example code?

pip install -r requirements.txt should install all dependencies (including git+https://github.com/chrisfilo/nilearn.git@enh/carpet_plot; for some reason your command excluded https://)

Oops, I was copying from here: https://github.com/arielletambini/denoiser/issues/4

Are there any plans to include detrending, using this function or something else, in fMRIprep? Doing detrending in volume space seems preferable, and necessary if motion parameters are to be taken into account, which makes it off-limits for people interested in surface space output from fMRIprep.

Not at this time.

We should be providing enough transforms to allow you to perform detrending in the volume and then resample on the surface using mri_vol2surf. Is there a reason in principle this shouldn’t work, or is it more a question of how to perform the resampling, given the fMRIPrep derivatives?

No reason in principle. It also occurs to me now that detrending in volume space is probably not really necessary, although I would prefer it. I would still argue that including detrending (and scaling to percent signal change) in fMRIPrep would be a useful option for many users.

I don’t disagree that it would be useful, but the point of fMRIPrep has been to do “minimal” preprocessing. That is, focus on the stuff that people generally agree on, use the best tools available for each task, and then leave the myriad choices for further processing to downstream analysis tools.

We are working on a next-step preprocessing tool called FitLins (for Fitting Linear models). We don’t currently support detrending, but I think it would be a fairly reasonable place to put it, assuming you don’t mind it happening in the target space. I think a denoiser that is capable of working in the original space and then using fMRIPrep’s output transforms to sample to the desired analysis space would also be very useful.

Dear all,

Thank you for the information on this thread. I was wondering about some details:
I am trying to write a version of aCompCor in MATLAB and was wondering about its inner workings. Just to clarify: when doing aCompCor, assuming your data is in the form voxels (rows) x timepoints (columns), after running the PCA, what you would take as regressors are the coefficients of the first components (a components x timepoints matrix)? Is that what the method does?
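
For what it's worth, that step can be sketched in numpy on toy data (a sketch only; whether to also detrend or variance-normalize each voxel first differs between implementations):

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_tp = 500, 120
X = rng.standard_normal((n_vox, n_tp))   # noise-ROI data: voxels x timepoints

# Center each voxel's time series, then take the SVD across time.
Xc = X - X.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# The regressors are the top component *time courses*: a (k x timepoints)
# matrix, one row per component, transposed into the design matrix.
k = 5
regressors = Vt[:k, :]
```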

To extract the “noise mask”, do you do this for each individual subject by coregistering the structural and functional images in native space, then segmenting and eroding the structural, or do you use one set of anatomical masks for everyone in MNI space? Do you think the two approaches are very different? The reason I ask is that I have some subjects for whom I have preprocessed images in MNI space but not necessarily the T1 in native space.

Do you do any censoring/interpolation of volumes based on framewise displacement, or do you just include the displacement as a regressor?
Thank you very much,

Leonardo Tozzi