With leading NaNs, wouldn't that suggest it is using the previous and the current time point?
Replacing NaNs with the mean value per regressor sounds like a good plan. That is what the denoiser tool uses, so I'll go with it and maybe try out that tool later. I think I'll stick with SPM for now.
BTW, there is a new tool in development that takes the outputs of fMRIPrep (or any other 4D NIfTI + TSV file) and performs various denoising strategies. It's still a work in progress, but you should check it out: https://github.com/arielletambini/denoiser
Is this tool still under development? I am trying to get it to work, but I run into problems when installing the carpet_plot function with the command:
Is there a better/recommended way of doing detrending with nilearn.signal.clean and the confounds TSV files returned by fMRIPrep? Does anyone have example code?
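Not an official recommendation, but here is a minimal sketch of how that could look. The file names, TR, and confound column names are placeholders (the column names depend on your fMRIPrep version), and it fills the leading NaNs with column means as discussed above:

```python
# Sketch: detrend a preprocessed BOLD series with nilearn.signal.clean,
# regressing out a subset of the fMRIPrep confounds TSV.
import pandas as pd
from nilearn import signal
from nilearn.input_data import NiftiMasker  # nilearn.maskers in newer releases

confounds = pd.read_csv("sub-01_task-rest_desc-confounds_regressors.tsv", sep="\t")
cols = ["trans_x", "trans_y", "trans_z", "rot_x", "rot_y", "rot_z",
        "csf", "white_matter"]                                 # placeholder column names
conf = confounds[cols].fillna(confounds[cols].mean()).values   # leading NaNs -> column mean

masker = NiftiMasker(mask_img="sub-01_task-rest_desc-brain_mask.nii.gz")
data = masker.fit_transform("sub-01_task-rest_desc-preproc_bold.nii.gz")  # time x voxels

cleaned = signal.clean(data, detrend=True, standardize=False,
                       confounds=conf, t_r=2.0)                # t_r is a placeholder
cleaned_img = masker.inverse_transform(cleaned)
cleaned_img.to_filename("sub-01_task-rest_desc-cleaned_bold.nii.gz")
```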
pip install -r requirements.txt should install all dependencies (including git+https://github.com/chrisfilo/nilearn.git@enh/carpet_plot; for some reason your command excluded https://)
Are there any plans to include detrending, using this function or something else, in fMRIPrep? Doing detrending in volume space seems preferable, and necessary if motion parameters are to be taken into account, which makes it off-limits for people interested in fMRIPrep's surface-space output.
We should be providing enough transforms to allow you to perform detrending in the volume and then resample onto the surface using mri_vol2surf. Is there a reason in principle this shouldn't work, or is it more a question of how to perform the resampling given the fMRIPrep derivatives?
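To illustrate that route, here is a rough sketch under a few assumptions: the BOLD series is already in T1w space, SUBJECTS_DIR points at the FreeSurfer subjects directory, the file names are placeholders, and the mri_vol2surf options shown should be checked against your FreeSurfer version:

```python
# Sketch: detrend in volume space, then sample the result onto the
# FreeSurfer surface with mri_vol2surf (header registration, since the
# cleaned volume is assumed to already be in T1w/FreeSurfer space).
import subprocess
from nilearn.image import clean_img

cleaned = clean_img("sub-01_task-rest_space-T1w_desc-preproc_bold.nii.gz",
                    detrend=True, standardize=False, t_r=2.0)
cleaned.to_filename("sub-01_task-rest_space-T1w_desc-cleaned_bold.nii.gz")

subprocess.run([
    "mri_vol2surf",
    "--mov", "sub-01_task-rest_space-T1w_desc-cleaned_bold.nii.gz",
    "--regheader", "sub-01",   # header-based registration; assumes a matching FreeSurfer subject
    "--hemi", "lh",
    "--projfrac", "0.5",
    "--o", "lh.sub-01_task-rest_cleaned.mgz",
], check=True)
```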
No reason in principle. It also occurs to me now that detrending in volume space is probably not strictly necessary, although I would prefer it. I would still argue that including detrending (and scaling to percent signal change) in fMRIPrep would be a useful option for many users.
I don’t disagree that it would be useful, but the point of fMRIPrep has been to do “minimal” preprocessing. That is, focus on the stuff that people generally agree on, use the best tools available for each task, and then leave the myriad choices for further processing to downstream analysis tools.
We are working on a next-step preprocessing tool called FitLins (for Fitting Linear models). We don’t currently support detrending, but I think it would be a fairly reasonable place to put it, assuming you don’t mind it happening in the target space. I think a denoiser that is capable of working in the original space and then using fMRIPrep’s output transforms to sample to the desired analysis space would also be very useful.
Thank you for the information on this thread. I was wondering about some details:
I am trying to write a version of aCompCor in MATLAB and was wondering about its inner workings. Just to clarify: assuming your data are arranged as voxels (rows) x timepoints (columns), after running the PCA, would you take as regressors the time courses of the first few components (say 5 or 6), i.e., a components x timepoints matrix? Is that what the method does? (See the sketch below, after these questions.)
To extract the "noise mask", do you do this for each individual subject by coregistering the structural and functional images in native space, then segmenting and eroding the structural, or do you use one set of anatomical masks for everyone in MNI space? Do you think the two approaches would differ much? The reason I ask is that I have some subjects for whom I have preprocessed images in MNI space but not necessarily the T1 in native space.
Do you do any censoring/interpolation of volumes based on framewise displacement, or do you just include the framewise displacement as a regressor?
Thank you very much,
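Regarding the first question above, here is a minimal Python sketch of the aCompCor idea as I understand it. It is an illustration only, not fMRIPrep's actual implementation (which lives in nipype's CompCor interface), and the mask file and number of components are placeholders:

```python
# Sketch of the aCompCor idea: PCA on voxel time series from an (already
# eroded) WM+CSF "noise" mask, keeping the top component time courses as
# nuisance regressors.
import numpy as np
from nilearn.input_data import NiftiMasker

n_components = 6  # placeholder; could instead keep components up to a variance threshold

# Voxel time series restricted to the noise mask: shape (timepoints, voxels)
masker = NiftiMasker(mask_img="sub-01_desc-WMCSFeroded_mask.nii.gz")
ts = masker.fit_transform("sub-01_task-rest_desc-preproc_bold.nii.gz")

# Remove each voxel's mean over time (per-voxel detrending and variance
# normalization are common additional steps before the PCA)
ts = ts - ts.mean(axis=0)

# SVD of the (timepoints x voxels) matrix: the left singular vectors are the
# component time courses, ordered by explained variance.
U, S, Vt = np.linalg.svd(ts, full_matrices=False)
acompcor_regressors = U[:, :n_components]  # timepoints x n_components design-matrix columns
```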
Is there a recommendation (or documentation) regarding the use of aCompCor / tCompCor for resting-state data when there is a specific interest in subcortical signal?
My current approach with fMRIPrep output is along the lines of this Python script, followed by adding the mean image back to the denoised 4D image with FSL's fslmaths.
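In case it helps, here is a small sketch of that last step in Python (file names are placeholders; the fslmaths equivalent would be computing the temporal mean with -Tmean and then using -add, but check the exact call for your pipeline):

```python
# Sketch: add the temporal mean of the original BOLD series back onto the
# denoised 4D image (roughly: fslmaths bold -Tmean mean; fslmaths denoised -add mean out).
import nibabel as nib

bold = nib.load("sub-01_task-rest_desc-preproc_bold.nii.gz")
denoised = nib.load("sub-01_task-rest_desc-denoised_bold.nii.gz")

mean_bold = bold.get_fdata().mean(axis=-1, keepdims=True)  # temporal mean, shape (x, y, z, 1)
restored = denoised.get_fdata() + mean_bold                # broadcast the mean onto every volume

nib.Nifti1Image(restored, denoised.affine).to_filename(
    "sub-01_task-rest_desc-denoisedplusmean_bold.nii.gz")
```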
Thank you for providing such clear guidance with regard to fMRIPrep's nuisance regressors. As a follow-up question, now that more recent versions of fMRIPrep produce many aCompCor variables (90 in my case), would you recommend including all aCompCor regressors in addition to the 6 motion parameters and FD at the run level?
Hi @Monica, you should select the number of aCompCor components that explains a pre-specified percentage of variance; please check the corresponding section of the individual report that fMRIPrep generates. fMRIPrep orders the components by the variance they explain.
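As a rough sketch of how that selection could be scripted (the JSON sidecar and its field names, such as "CumulativeVarianceExplained" and "Mask", are my recollection of what fMRIPrep writes next to the confounds TSV, so please verify them against your own outputs; file and column names are placeholders):

```python
# Sketch: keep aCompCor components (from the combined WM+CSF mask) up to
# ~50% cumulative variance explained, plus 6 motion parameters and FD.
import json
import pandas as pd

confounds = pd.read_csv("sub-01_task-rest_desc-confounds_regressors.tsv", sep="\t")
with open("sub-01_task-rest_desc-confounds_regressors.json") as f:
    meta = json.load(f)

keep = [name for name, info in meta.items()
        if name.startswith("a_comp_cor")
        and info.get("Mask") == "combined"
        and info.get("CumulativeVarianceExplained", 1.0) <= 0.5]
# (One might also include the first component that crosses the 50% threshold.)

motion = ["trans_x", "trans_y", "trans_z", "rot_x", "rot_y", "rot_z"]
nuisance = confounds[motion + ["framewise_displacement"] + keep]
```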
Hi @Monica! To learn more about various confound regressors and popular denoising strategies I recommend a paper by Parkes et al. (2018): https://www.sciencedirect.com/science/article/pii/S1053811917310972. The paper offers a nice review and comparison of various denoising strategies + some very useful recommendations.
There is no gold standard for denoising, and the performance of a particular denoising pipeline may depend on the data. In response to this problem, we're developing fMRIDenoise – a tool for automated denoising, comparison of denoising strategies, and functional connectivity data quality control (https://github.com/nbraingroup/fmridenoise).
In the simplest scenario, only the path to your data in BIDS format is needed to run the entire fMRIDenoise procedure. The data should first be preprocessed with fMRIPrep (version > 1.4.0). Running the full procedure will denoise your data using the most popular denoising strategies and return FC quality measures, which may help you select the best-performing one. It's still an early version of the tool, but it should work well.
It is also good practice to check whether your FC results are reproducible across different denoising strategies.
Maybe you’ll find this useful for your application. We’ll be happy to hear your feedback. Please let me know if you need some further advice.
Hi @karofinc,
Thank you very much for alerting me to fMRIDenoise! It sounds incredibly helpful and I will be sure to let you know how it runs once I’ve had a chance to try it out.