Confounds from fmriprep: which one would you use for GLM?

fmriprep

#21

With leading NaNs, wouldn’t that suggest it is using the previous and current time points?

Replacing NaNs with the mean value per regressor sounds like a good plan. That is what the denoiser tool uses, so I’ll go with it and maybe try out that tool later. I think I’ll stick with SPM for now.
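For reference, this is roughly what I plan to do before handing the regressors to SPM (a quick pandas sketch; the file name and the confound column names are only examples and depend on the fMRIPrep version):

import pandas as pd

# Load the confounds file written by fMRIPrep (tab-separated)
confounds = pd.read_csv("sub-01_task-rest_bold_confounds.tsv", sep="\t")

# Columns to use as nuisance regressors (names vary across fMRIPrep versions)
cols = ["FramewiseDisplacement", "X", "Y", "Z", "RotX", "RotY", "RotZ"]
regressors = confounds[cols]

# Replace the leading NaNs (e.g. in FD) with each column's mean
regressors = regressors.fillna(regressors.mean())

# Write a plain-text matrix that SPM can read as multiple regressors
regressors.to_csv("confounds_for_spm.txt", sep="\t", header=False, index=False)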

As always, thanks much.


#22

That’s correct - FD is the movement between the previous and the current timepoint. Sorry for the confusion.
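For anyone following along, the Power et al. framewise displacement is roughly the following (a minimal numpy sketch, assuming the six realignment parameters are ordered as three translations in mm followed by three rotations in radians):

import numpy as np

def framewise_displacement(motion, radius=50.0):
    """FD per volume from a (timepoints x 6) motion-parameter array."""
    motion = np.asarray(motion, dtype=float).copy()
    # Convert rotations to millimetres as arc length on a 50 mm sphere
    motion[:, 3:] *= radius
    # Backward differences: each volume relative to the previous one
    diffs = np.abs(np.diff(motion, axis=0))
    fd = diffs.sum(axis=1)
    # The first volume has no predecessor, hence the leading NaN
    return np.concatenate([[np.nan], fd])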


#23

BTW there is a new tool in development that takes in outputs of FMRIPREP (or any other 4D nifti + TSV file) and performs various denoising strategies. It’s still a work in progress, but you should check it out: https://github.com/arielletambini/denoiser

Is this tool still under development? I am trying to get it to work, but I run into problems when installing the carpet_plot function. The command:

pip install git+github.com/chrisfilo/nilearn.git@enh/carpet_plot

gives me this error:

Invalid requirement: 'git+github.com/chrisfilo/nilearn.git@enh/carpet_plot'
It looks like a path. File 'git+github.com/chrisfilo/nilearn.git@enh/carpet_plot' does not exist.

Is there a better/recommended way of doing detrending with nilearn.signal.clean and the confounds TSV files returned by fMRIPrep? Does anyone have example code?
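Something like the following is what I have been attempting so far (a rough sketch only; the file names, the confound columns, and the TR are placeholders, and I am not sure the settings are sensible):

import pandas as pd
from nilearn.input_data import NiftiMasker
from nilearn import signal

# Confounds written by fMRIPrep (column names depend on the version)
confounds = pd.read_csv("sub-01_task-rest_bold_confounds.tsv", sep="\t")
cols = ["X", "Y", "Z", "RotX", "RotY", "RotZ", "FramewiseDisplacement"]
conf = confounds[cols].fillna(0).values

# Extract voxel time series (timepoints x voxels) within the brain mask
masker = NiftiMasker(mask_img="sub-01_task-rest_bold_brainmask.nii.gz")
data = masker.fit_transform("sub-01_task-rest_bold_preproc.nii.gz")

# Detrend, regress out the confounds and standardize in one call
cleaned = signal.clean(data, detrend=True, standardize=True,
                       confounds=conf, t_r=2.0)

# Put the cleaned time series back into a 4D image
cleaned_img = masker.inverse_transform(cleaned)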


#24

pip install -r requirements.txt should install all dependencies (including git+https://github.com/chrisfilo/nilearn.git@enh/carpet_plot; for some reason your command excluded https://)


#25

Oops, I was copying from here: https://github.com/arielletambini/denoiser/issues/4


#26

Are there any plans to include detrending, using this function or something else, in fMRIPrep? Doing detrending in volume space seems preferable, and necessary if motion parameters are to be taken into account, which puts it out of reach for people interested in the surface-space output from fMRIPrep.


#27

Not at this time.

We should be providing enough transforms to allow you to perform detrending in the volume and then resample on the surface using mri_vol2surf. Is there a reason in principle this shouldn’t work, or is it more a question of how to perform the resampling, given the fMRIPrep derivatives?


#28

No reason in principle. It also occurs to me now that detrending in volume space is probably not really necessary, although I would prefer it. I would still argue that including detrending (and scaling to percent signal change) in fMRIPrep would be a useful option for many users.


#29

I don’t disagree that it would be useful, but the point of fMRIPrep has been to do “minimal” preprocessing. That is, focus on the stuff that people generally agree on, use the best tools available for each task, and then leave the myriad choices for further processing to downstream analysis tools.

We are working on a next-step preprocessing tool called FitLins (for Fitting Linear models). We don’t currently support detrending, but I think it would be a fairly reasonable place to put it, assuming you don’t mind it happening in the target space. I think a denoiser that is capable of working in the original space and then using fMRIPrep’s output transforms to sample to the desired analysis space would also be very useful.


#30

Dear all,

Thank you for the information on this thread. I was wondering about some details:
I am trying to write a version of aCompCor in Matlab and was wondering about its inner workings. Just to clarify: when doing aCompCor, assuming your data is in the form voxels (rows) x timepoints (columns), after running the PCA, what you would take as regressors are the coefficients of the first five or six components (a components x timepoints matrix)? Is that what the method does?
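To make the question concrete, this is my current understanding in code (numpy rather than Matlab, purely for illustration; the number of components and the demeaning step are my own assumptions):

import numpy as np

def acompcor_regressors(noise_ts, n_components=5):
    """noise_ts: voxels x timepoints matrix from the eroded noise mask."""
    # Remove each voxel's mean over time before the PCA
    centered = noise_ts - noise_ts.mean(axis=1, keepdims=True)
    # SVD of the (timepoints x voxels) matrix; the left singular vectors
    # are the component time courses
    u, s, vt = np.linalg.svd(centered.T, full_matrices=False)
    # First few component time courses, used as nuisance regressors
    return u[:, :n_components]    # timepoints x n_components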

To extract the “noise mask”, do you do this for each individual subject by coregistering the structural and functional images in native space, then segmenting and eroding the structural, or do you use one set of anatomical masks for everyone in MNI space? Would you expect the two approaches to be very different? The reason I ask is that for some subjects I have preprocessed images in MNI space but not necessarily the T1 in native space.

Do you do any censoring/interpolation of volumes based on framewise displacement, or do you just include the displacement as a regressor?
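By censoring I mean something like adding one spike regressor per high-motion volume, e.g. this rough sketch (the 0.5 mm threshold is arbitrary):

import numpy as np

def spike_regressors(fd, threshold=0.5):
    """One column per volume whose framewise displacement exceeds threshold."""
    fd = np.nan_to_num(np.asarray(fd, dtype=float))
    bad = np.where(fd > threshold)[0]
    spikes = np.zeros((len(fd), len(bad)))
    # Each column is 1 at the censored volume and 0 everywhere else
    spikes[bad, np.arange(len(bad))] = 1.0
    return spikes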
Thank you very much,

Leonardo Tozzi