Does anyone know of any way to account for reduced degrees of freedom in fMRI data at the run-level modeling stage, using any of the major fMRI analysis tools (e.g., AFNI, FSL, SPM, or nilearn)?
Specifically, I’m referring to workflows that involve denoising before modeling. Two specific cases I have in mind are multi-echo ICA-based denoising (as in
tedana) and Marchenko-Pastur PCA-based denoising (i.e.,
dwidenoise, as proposed in Adhikari et al., 2019). With most decomposition-based denoising approaches, such as
ICA-AROMA, one could include the rejected components in the GLM so that everything’s accounted for automatically, but with
dwidenoise the recommendation is to apply the denoising before any other preprocessing, and it operates in a searchlight, so the number of components it removes varies across the brain. The denoised data therefore have spatially varying degrees of freedom, and I can’t think of any way to account for that directly in the GLM.
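To make the component-regression workaround concrete, here is a minimal numpy sketch on synthetic data (all names and dimensions are hypothetical, not from any specific tool): the rejected component time courses are added as nuisance columns in the design matrix, so the residual degrees of freedom drop by one per column and the error term is adjusted automatically.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tr, n_vox, n_comp = 200, 50, 5

task = rng.standard_normal(n_tr)                  # hypothetical task regressor
rejected = rng.standard_normal((n_tr, n_comp))    # rejected ICA component time courses
# Synthetic data: task signal + structured noise from the rejected components
data = (2.0 * task[:, None]
        + rejected @ rng.standard_normal((n_comp, n_vox))
        + rng.standard_normal((n_tr, n_vox)))

# Include the rejected components as nuisance columns alongside the task
# regressor and an intercept; the GLM then "knows" about the removed variance.
X = np.column_stack([task, rejected, np.ones(n_tr)])
beta, _, _, _ = np.linalg.lstsq(X, data, rcond=None)

# Residual DOF falls by one per design column -- this is the automatic
# accounting that spatially varying denoising breaks.
dof = n_tr - X.shape[1]
```

The point of the sketch is the last line: with a single design matrix, one global DOF value is correct everywhere, which is exactly what no longer holds after searchlight-based denoising.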
Ideally, one could supply a DOF map or a single DOF value to the tool, much as FEAT allows users to provide voxelwise confound regressors. Is there anything like that out there?
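For what it’s worth, the statistical step a DOF map would feed into is straightforward; here is a hedged scipy sketch (the inputs are invented, and no existing tool exposes this interface as far as I know) showing how voxelwise p-values would be computed from a t-map and a matching DOF map instead of one global DOF:

```python
import numpy as np
from scipy import stats

# Hypothetical inputs: identical t-statistics at three voxels, but a DOF map
# reflecting how many components the denoising removed at each voxel
# (e.g., nominal DOF minus locally removed components).
t_map = np.array([2.5, 2.5, 2.5])
dof_map = np.array([180.0, 120.0, 60.0])

# One-sided p-values, evaluated voxel by voxel against the local DOF:
p_map = stats.t.sf(t_map, dof_map)
```

The same t-value becomes less significant where more degrees of freedom were spent on denoising, which is the correction a global-DOF GLM silently skips.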