My functional images have already been denoised with AFNI prior to running nilearn’s FirstLevelModel. However, I’m now wondering whether I should instead pass my raw functional data, along with my confounds, directly to nilearn.
Is there any advantage to running first-level models on the raw functional data and adding the confounds to FirstLevelModel, versus using already-denoised functional data and not adding any confounds?
In addition to what @Remi-Gau said, it is better to run only a single GLM, because then you can account for potential collinearities between the task and motion regressors. You can also better keep track of how many temporal degrees of freedom (tDOF) you are sacrificing.
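To illustrate the collinearity point, here is a minimal NumPy sketch (with made-up synthetic data, not a real fMRI pipeline): when a task regressor shares variance with a motion confound, denoising first and then fitting the task removes some of the task signal, whereas a single joint GLM does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical task regressor (centered boxcar) and a motion confound that
# is partially correlated with the task -- all names and values here are
# invented for illustration.
task = np.tile([-0.5, 0.5], n // 2)
motion = 0.8 * task + rng.normal(0, 1, n)
y = 2.0 * task + 1.0 * motion + rng.normal(0, 0.1, n)  # true task beta = 2.0

# Single GLM: task and confound fitted jointly.
X = np.column_stack([np.ones(n), task, motion])
beta_joint = np.linalg.lstsq(X, y, rcond=None)[0][1]

# Two-step approach: "denoise" first by regressing out motion,
# then fit the task on the residuals.
M = np.column_stack([np.ones(n), motion])
resid = y - M @ np.linalg.lstsq(M, y, rcond=None)[0]
beta_twostep = np.linalg.lstsq(
    np.column_stack([np.ones(n), task]), resid, rcond=None
)[0][1]

print(beta_joint)    # close to the true value of 2
print(beta_twostep)  # biased toward 0: shared task/motion variance was removed
```

The joint fit recovers the true task effect, while the two-step estimate is shrunk because the shared task/motion variance was discarded in the denoising step.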
@Steven, how can I check how many temporal DOF I’m sacrificing? I’m running nilearn’s FirstLevelModel but couldn’t find an attribute that reports this.
Hi @mri, each predictor (that is, each column of the design matrix) in your model costs one tDOF. There are also “hidden” tDOF lost during preprocessing steps such as temporal filtering, but that is a separate discussion.
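A rough way to count this, sketched here with a stand-in pandas DataFrame: in nilearn you would inspect `model.design_matrices_` (a list with one DataFrame per run) after calling `FirstLevelModel.fit()`; the column names below are hypothetical.

```python
import numpy as np
import pandas as pd

n_scans = 200  # number of volumes in the run

# Stand-in for a fitted first-level design matrix; in nilearn this would be
# model.design_matrices_[0]. Column names here are invented for illustration.
columns = (
    ["task"]                               # 1 task regressor
    + [f"trans_{a}" for a in "xyz"]        # 3 translation confounds
    + [f"rot_{a}" for a in "xyz"]          # 3 rotation confounds
    + [f"drift_{i}" for i in range(1, 4)]  # 3 drift terms
    + ["constant"]                         # intercept
)
design = pd.DataFrame(np.zeros((n_scans, len(columns))), columns=columns)

tdof_spent = design.shape[1]   # one tDOF per predictor (column)
tdof_left = n_scans - tdof_spent
print(tdof_spent, tdof_left)   # 11 189
```

So with this hypothetical design, 11 of the 200 temporal degrees of freedom are spent on the model, leaving 189 for the error term.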