First-level models with already-denoised functional images

My functional images have already been denoised with AFNI prior to running Nilearn's FirstLevelModel. However, I'm now wondering whether I should instead pass my raw functional data, along with my confounds, directly to Nilearn.

Is there any advantage to running first-level models on the raw functional data and adding the confounds to FirstLevelModel, versus using already-denoised functional data and not adding any confounds?

One very practical advantage of removing the confounds during GLM estimation: no need to save another copy of your images before running your GLM.

It also makes it easier to try different denoising strategies (different sets of confounds) without, once again, having to save a separate file for each.

If you are running your analysis on an fMRIPrep dataset, you can use nilearn.glm.first_level.first_level_from_bids,
which lets you easily use the different confound-loading strategies built into Nilearn: nilearn.interfaces.fmriprep.load_confounds

Hi @mri

In addition to what @Remi-Gau said, it is better to run only a single GLM because you can then account for potential collinearities between the task and motion regressors. You can also better keep track of how many temporal degrees of freedom you are sacrificing.
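The collinearity point can be illustrated with a small NumPy simulation (all numbers and regressor shapes below are made up for illustration, not taken from any real dataset): when motion partially tracks the task, denoising in a separate first step removes shared task variance and biases the task estimate low, whereas a single joint GLM recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated task regressor and a motion regressor that partially
# tracks the task (e.g., stimulus-correlated motion).
task = np.sin(np.linspace(0, 8 * np.pi, n))
motion = 0.6 * task + rng.normal(size=n)

# Data generated with a true task effect of 2.0 plus a motion artifact.
y = 2.0 * task + 1.5 * motion + rng.normal(size=n)

# Two-step approach: regress motion out of the data first,
# then fit the task on the residuals.
X_m = np.column_stack([motion, np.ones(n)])
resid = y - X_m @ np.linalg.lstsq(X_m, y, rcond=None)[0]
X_t = np.column_stack([task, np.ones(n)])
beta_two_step = np.linalg.lstsq(X_t, resid, rcond=None)[0][0]

# Single GLM: task and motion fit jointly.
X_joint = np.column_stack([task, motion, np.ones(n)])
beta_joint = np.linalg.lstsq(X_joint, y, rcond=None)[0][0]

print(beta_two_step)  # biased low: shared task/motion variance was removed
print(beta_joint)     # close to the true effect of 2.0
```

The joint fit partitions the shared variance between regressors instead of silently attributing all of it to motion, which is why a single GLM is preferable when task and confounds are correlated.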

Best,
Steven


Hi @Remi-Gau and @Steven,

Many thanks, that is great advice!

@Steven, how can I check how many temporal DOF I’m sacrificing? I’m running nilearn’s FirstLevelModel, but couldn’t find the corresponding attribute that would give me this information.

Hi @mri, each predictor in your model costs one tDOF. There are also "hidden" tDOF lost in preprocessing steps such as filtering, but that is a separate discussion.
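As a concrete sketch of the bookkeeping (the regressor names and counts here are purely illustrative, not a recommended model):

```python
# Hypothetical first-level design: 2 task regressors, 6 motion
# parameters, 3 drift terms, and an intercept.
n_scans = 240
regressors = (
    ["task_a", "task_b"]                  # conditions of interest
    + [f"motion_{i}" for i in range(6)]   # confound regressors
    + [f"drift_{i}" for i in range(3)]    # cosine drift terms
    + ["intercept"]
)

# Each regressor costs one temporal degree of freedom.
residual_dof = n_scans - len(regressors)
print(f"{len(regressors)} regressors -> {residual_dof} residual tDOF")
```

With a fitted Nilearn FirstLevelModel, the design matrix is available as `model.design_matrices_[0]` (a pandas DataFrame), so its number of columns gives the regressor count for the same calculation.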