Resting-state de-noising with clean_img, signal.clean and NiftiMasker

Hi everyone,

I am new to nilearn (and to Python in general). I have been trying out different Nilearn functions to de-noise 7T resting-state data that was preprocessed with fMRIPrep. I tried nilearn.image.clean_img, nilearn.signal.clean, and nilearn.maskers.NiftiMasker, but each time the resulting de-noised image looks strange.

I copied and pasted a brief example of the code I used for each function below, along with an example output image. I would really appreciate it if someone could tell me what I am doing wrong, what might be going wrong, and how I can fix it.

Thank you in advance.

–

from nilearn import image as nimg
from nilearn.maskers import NiftiMasker
import nilearn.signal
import nibabel as nb
import numpy as np
import pandas as pd

mask_file='/path/to/the/mni_normalized/mask/subj_mask_normalized.nii.gz'
mask_img = nimg.load_img(mask_file)

func_file='/path/to/the/mni_normalized/func/subj_normalized_bold.nii.gz'
func_img = nimg.load_img(func_file)

confound_file='/path/to/the/confound/file/confounds.tsv'
confounds = pd.read_csv(confound_file, delimiter='\t')
confounds = confounds.values

#(confounds include motion, WM, CSF, and derivatives for all from fmriprep, RETROICOR regressors derived using a toolbox)
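
One thing that might be worth guarding against (an assumption on my part, not something confirmed from your file): the derivative columns that fMRIPrep writes usually contain NaN in the first row, and NaNs in the confound matrix can propagate into the cleaned data. A minimal sketch of handling that before converting to an array:

# Sketch: replace NaNs in the confound table before passing it to nilearn
# (fMRIPrep derivative columns typically have NaN in the first row).
confounds_df = pd.read_csv(confound_file, delimiter='\t')
confounds_df = confounds_df.fillna(0)  # or drop the first volume from both data and confounds
confounds = confounds_df.values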

image.clean_img

clean_img= nimg.clean_img(func_img, confounds=confounds, detrend=True, standardize=False, low_pass=0.2, high_pass=0.01, t_r=0.8, mask_img=mask_img)

clean_img.to_filename('clean_img_denoised_bold.nii.gz')
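
For reference, one quick way to sanity-check the output would be to compare the temporal mean image before and after cleaning; just a sketch using nilearn's plotting helpers (with detrend=True the cleaned mean will sit close to zero):

# Sketch: compare the mean image before and after cleaning.
from nilearn import plotting
plotting.plot_epi(nimg.mean_img(func_img), title='raw mean')
plotting.plot_epi(nimg.mean_img(clean_img), title='cleaned mean')
plotting.show()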

Output:

[screenshot of the output image]

signal.clean


func_data = func_img.get_fdata()
Number_of_Trs = func_img.shape[-1]  # number of volumes
func_data = np.reshape(func_data, (Number_of_Trs, -1))  # matching signal.shape[0] and confound.shape[0]

signal_cleaned=nilearn.signal.clean(func_data, runs=None, detrend=True, standardize=False, sample_mask=None, confounds=confounds, standardize_confounds=False, filter='butterworth', low_pass=0.2, high_pass=0.01, t_r=0.8, ensure_finite=False)

clean_data = np.reshape(signal_cleaned, func_img.shape)

clean_img = nb.Nifti1Image(clean_data, func_img.affine, func_img.header)

clean_img.to_filename('signal_clean_denoised_bold.nii.gz')

The output from this one looks more like a normal functional image, but it has stripes on it and it seems to have a background added to it now (the input image here was skull-stripped, with no background).
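
In case it is relevant (this is an assumption on my part, not something I have confirmed): get_fdata() returns the data as (x, y, z, time), so reshaping it straight to (Number_of_Trs, -1) interleaves voxels and time, which could produce stripe artefacts. A sketch of a round-trip that keeps time on the first axis:

# Sketch: go to (time, voxels) and back without scrambling the axes.
func_data = func_img.get_fdata()                        # (x, y, z, t)
func_2d = func_data.reshape(-1, func_img.shape[-1]).T   # (t, voxels)
cleaned_2d = nilearn.signal.clean(func_2d, confounds=confounds, detrend=True, low_pass=0.2, high_pass=0.01, t_r=0.8)
clean_data = cleaned_2d.T.reshape(func_img.shape)       # back to (x, y, z, t)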

NiftiMasker

#based on the whole brain time series extraction given in the example https://nilearn.github.io/stable/auto_examples/03_connectivity/plot_seed_to_voxel_correlation.html#sphx-glr-auto-examples-03-connectivity-plot-seed-to-voxel-correlation-py

brain_masker=NiftiMasker(mask_img=mask_img, runs=None, smoothing_fwhm=3.2, standardize=False, standardize_confounds=False, detrend=False, high_variance_confounds=False, low_pass=0.2, high_pass=0.01, t_r=0.8, memory_level=1, memory='nilearn_cache', verbose=0, reports=True)

brain_time_series=brain_masker.fit_transform(func_img, confounds)

inversed=brain_masker.inverse_transform(brain_time_series)

inversed.to_filename('masker_denoised_bold.nii.gz')

The output image here looks a bit more like the ones above, but better (still nothing like a functional image).

Regarding NiftiMasker

#When I ran fit_transform, I got this warning:

Generation of a mask has been requested (y != None) while a mask has been provided at masker creation. Given mask will be used. warnings.warn('[%s.fit] Generation of a mask has been'

#And this one after it ran:

local/lib/python3.9/site-packages/nilearn/maskers/nifti_masker.py:570: UserWarning: Persisting input arguments took 31.77s to run.
If this happens often in your code, it can cause performance problems (results will be correct in all cases). The reason for this is probably some large input arguments for a wrapped function (e.g. large strings). THIS IS A JOBLIB ISSUE. If you can, kindly provide the joblib's team with an example so that they can fix the problem. data = self._cache(
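
If I read that first warning correctly (an assumption on my part), it could be because I am passing the confounds array as the second positional argument of fit_transform, so it lands in the y slot instead of confounds, and the confounds may not be applied at all. A sketch with the confounds passed by keyword:

# Sketch: pass the confounds explicitly by keyword so they are not treated as y.
brain_time_series = brain_masker.fit_transform(func_img, confounds=confounds)
inversed = brain_masker.inverse_transform(brain_time_series)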

Did you use the mask generated by fMRIPrep?

Thank you bwinsto2 for the reply, and sorry for the delayed response; I just had a chance to get my head around this again.

I tried it without standardizing and detrending and it still gives the same results. And yes, I used the fMRIPrep-generated mask in this example.

I also tried generating the brain mask via the masker (using the whole-brain-template option), but that seems to give another error, shown below.

Not sure what exactly is going wrong here.

Masker

brain_masker2=NiftiMasker(mask_img=None, runs=None, smoothing_fwhm=3.2, standardize=False, standardize_confounds=False, detrend=False, high_variance_confounds=False, low_pass=0.2, high_pass=0.01, t_r=0.8, target_affine=func_img.affine, target_shape=func_img.shape[0:3], mask_strategy='whole-brain-template', mask_args={'threshold':0.5}, memory_level=1, memory='nilearn_cache', verbose=0, reports=True)

brain_time_series=brain_masker2.fit_transform(func_img, confounds)

Error

/usr/local/easybuild-2019/easybuild/software/compiler/gcccore/11.2.0/python/3.9.6/lib/python3.9/site-packages/joblib/memory.py:614: UserWarning: Cannot inspect object functools.partial(<function compute_brain_mask at 0x2b7442063d30>, mask_type='whole-brain'), ignore list will not work.

return hashing.hash(filter_args(self.func, self.ignore, args, kwargs),

/home/sevince/.local/lib/python3.9/site-packages/nilearn/maskers/nifti_masker.py:452: JobLibCollisionWarning: Cannot detect name collisions for function 'unknown'

self.mask_img_ = self._cache(compute_mask, ignore=['verbose'])(

/usr/local/easybuild-2019/easybuild/software/compiler/gcccore/11.2.0/python/3.9.6/lib/python3.9/site-packages/joblib/memory.py:792: UserWarning: Cannot inspect object functools.partial(<function compute_brain_mask at 0x2b7442063d30>, mask_type='whole-brain'), ignore list will not work.

argument_dict = filter_args(self.func, self.ignore,

/home/sevince/.local/lib/python3.9/site-packages/nilearn/maskers/nifti_masker.py:452: UserWarning: Persisting input arguments took 30.94s to run.

If this happens often in your code, it can cause performance problems (results will be correct in all cases). The reason for this is probably some large input arguments for a wrapped function (e.g. large strings).

THIS IS A JOBLIB ISSUE. If you can, kindly provide the joblib's team with an example so that they can fix the problem.

self.mask_img_ = self._cache(compute_mask, ignore=['verbose'])(
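
These look like joblib caching warnings rather than hard errors, so perhaps the mask is still being computed. One workaround I might try (a sketch, assuming nilearn.masking.compute_brain_mask is available in my version) is to compute the mask once outside the masker and pass it in, which sidesteps caching the partial function:

# Sketch: build the whole-brain mask separately and hand it to the masker.
from nilearn.masking import compute_brain_mask
auto_mask = compute_brain_mask(nimg.mean_img(func_img), threshold=0.5, mask_type='whole-brain')
brain_masker2 = NiftiMasker(mask_img=auto_mask, smoothing_fwhm=3.2, low_pass=0.2, high_pass=0.01, t_r=0.8)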

I just started using nilearn so I am definitely not an expert in this, but I have gotten good results. A few things you could try:

  1. Use this function to load your confounds from fMRIPrep.
  2. Try running clean_img (for example) with the most basic settings: turn everything off, then turn the options on one by one to see what is causing the issue (see the sketch after this list). I (stupidly) thought my pipeline wasn't working because I got a weird-looking brain after setting standardize to True, but this is expected behavior since each voxel is standardized separately.
  3. Try running this on publicly available data from the nilearn tutorials to see whether the issue is with your code or your data. This will also help power users reproduce your issue.
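
To illustrate point 2, a sketch of the stripped-down starting point (the parameter values here are just placeholders, not recommendations):

# Sketch: start with confound regression only, then re-enable options one at a time.
from nilearn import image as nimg
cleaned = nimg.clean_img(func_img, confounds=confounds, detrend=False, standardize=False, mask_img=mask_img)
# next runs: add detrend=True, then low_pass/high_pass together with t_r, then standardize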

Thank you for the suggestions.

  1. I am using confounds from fMRIPrep in conjunction with physiological confounds derived using another toolbox, so I cannot use that function directly.

  2. I tried not standardizing in each function above (along with not detrending, etc.), but it still gives similar results.

  3. Will do when I get a chance, thank you. It may be because I am using 7-Tesla data, so maybe the data is too big for those functions to handle (most probably not an issue, but who knows).

I will try something simpler: fit the confounds to the data, save the residuals, and add the mean back to get the "cleaned" data, then work out the other steps (filtering, smoothing, etc.) separately.
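
Roughly what I have in mind, as a sketch (assuming the data has already been reshaped to time x voxels and the confound matrix has matching rows):

# Sketch: regress the confounds out of each voxel and add the temporal mean back.
import numpy as np

def regress_out(data_2d, conf):
    design = np.column_stack([conf, np.ones(conf.shape[0])])  # confounds + intercept
    beta, *_ = np.linalg.lstsq(design, data_2d, rcond=None)   # least-squares fit
    residuals = data_2d - design @ beta                       # remove the fitted part
    return residuals + data_2d.mean(axis=0)                   # restore the voxel means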

Thanks again!