What is the meaning of .fit() in NiftiMasker

I have BOLD data after fmriprep with the shape (65, 77, 49, 297).
Now I want to keep only the vision-related areas.
So I downloaded a mask from Neurosynth and ran the following:

import nibabel as nib
from nilearn import image
from nilearn.image import clean_img
from nilearn.maskers import NiftiMasker

vision_mask_fn = "primary_visual_association-test_z_FDR_0.01.nii.gz"
vision_img = nib.load(vision_mask_fn)
vision_img = image.math_img("img > 0", img=vision_img)  # binarize the z map
fitted_vision_mask = NiftiMasker(mask_img=vision_img)
fitted_vision_mask.fit()
# fmriprep-style preprocessed BOLD file (adjust the name to your own outputs)
nii_file = f"{fmriprep_folder}/func/sub-{subj_id}_task-{task_name}_desc-preproc_bold.nii.gz"
clean_bold_nii = clean_img(nii_file, detrend=True, standardize=True, confounds=confounds_arr, t_r=2)
subj_fmri_data = fitted_vision_mask.transform(clean_bold_nii)
This gives data with the shape (297, 1831), where 1831 is the number of voxels in the mask and 297 the number of time points.

  1. Is this the correct way to read the data and keep only the voxels inside the mask?
  2. What is the meaning of running fitted_vision_mask.fit()? What does it fit?

Thanks!

Hi, yes that is correct.
However, note that instead of using clean_img you can perform these operations during masking, by passing these parameters (detrend, standardize, etc.) to the masker itself: fitted_vision_mask = NiftiMasker(mask_img=vision_img, detrend=True, standardize=True, ...).
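For example, a minimal sketch reusing your variable names (the cleaning settings simply mirror your clean_img call):

masker = NiftiMasker(mask_img=vision_img, detrend=True, standardize=True, t_r=2)
masker.fit()
# confounds are passed at transform time rather than to clean_img
subj_fmri_data = masker.transform(nii_file, confounds=confounds_arr)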
With the (default) parameters that you use here, NiftiMasker.fit does almost nothing: it just makes and stores a copy of the mask.
With different parameters it would do more work: for example, if you had passed target_affine and target_shape, it would resample the mask image to the required resolution; and if you had provided images to fit instead of a precomputed mask to __init__, fit would automatically compute a mask from the data.
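For illustration, a sketch of those two situations where fit does real work (reusing nii_file and vision_img from your snippet):

# 1) Resample the precomputed mask to the functional image's grid:
func_img = nib.load(nii_file)
masker = NiftiMasker(mask_img=vision_img,
                     target_affine=func_img.affine,
                     target_shape=func_img.shape[:3])
masker.fit()  # the resampled mask is stored in masker.mask_img_

# 2) No precomputed mask: fit() estimates one from the data themselves:
masker = NiftiMasker()
masker.fit(nii_file)
print(masker.mask_img_.shape)  # mask computed and stored by fit()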


@jeromedockes Thanks! Is there an advantage to doing it all in the masker as you wrote (fitted_vision_mask = NiftiMasker(mask_img=vision_img, detrend=True, standardize=True, ...)) rather than breaking it down as I did?

It shouldn’t make a big difference.
However, if you were also spatially smoothing the images by passing smoothing_fwhm to the masker, the temporal operations would happen after smoothing in the masker.
Also, by doing it in the masker, temporal operations such as detrending are only applied to the voxels inside the mask, so it may be faster and use less memory.
In addition, the masker can cache these operations if you provide the memory and memory_level parameters, meaning that if you mask the same images again (e.g. by running the same script a second time) the computations will be skipped the second time.
Finally, it reduces the complexity of the script a bit by removing one step and avoiding an extra intermediate image (which can also reduce memory usage).
So using clean_img as you did is fine, but I would recommend relying on the masker instead (see the sketch below).
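Putting those points together, a sketch of a single masker doing smoothing, cleaning and caching (the FWHM, cache directory and memory_level values are placeholders):

masker = NiftiMasker(mask_img=vision_img,
                     smoothing_fwhm=5,        # spatial smoothing happens before the temporal operations
                     detrend=True, standardize=True, t_r=2,
                     memory="nilearn_cache",  # cache masking/cleaning results on disk
                     memory_level=1)
subj_fmri_data = masker.fit_transform(nii_file, confounds=confounds_arr)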

@jeromedockes Thanks, I will change it.
Regarding smoothing_fwhm, how do I know whether I should use it? When is it recommended?

Sorry, but I don't really know; it depends a lot on your data and the kind of analysis you intend to do.
I would recommend looking for some nilearn examples that are similar to the analysis you plan.
Some don't use smoothing; others use FWHMs that vary between 2 and 10 mm, most of them between 4 and 6 mm.
You can also look at the FWHM reported (usually in “methods” sections) in papers that are relevant for your project.
(When looking at nilearn examples that fit GLMs with FirstLevelModel or SecondLevelModel, the masker is not fitted explicitly, but this happens under the hood.
In that case the FWHM is provided directly to the FirstLevelModel or SecondLevelModel.)
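As an illustration, a sketch of passing the FWHM to a first-level GLM (events_df and confounds_df are hypothetical placeholders; the t_r and FWHM values too):

from nilearn.glm.first_level import FirstLevelModel

glm = FirstLevelModel(t_r=2.0, smoothing_fwhm=5, mask_img=vision_img)
glm = glm.fit(nii_file, events=events_df, confounds=confounds_df)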

Smoothing tends to help increase the signal-to-noise ratio in analyses where inferences are made at the level of small volumes, such as individual voxels, assuming no other spatial averaging is taking place. For example, if you were doing ROI-to-ROI connectivity, you would not normally smooth spatially, since averaging the signals within each ROI serves a similar purpose. The size of the kernel will ultimately depend on your regions of interest: if you are focusing on a small amygdala subdivision, for example, you would want a small kernel. In general, as stated earlier, 4-6 mm kernels are the most common.
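To make the contrast concrete, a sketch of both cases (atlas_img is a hypothetical parcellation image; the FWHM is a placeholder):

from nilearn.maskers import NiftiLabelsMasker, NiftiMasker

# Voxel-level analysis: smooth inside the masker before extracting voxel time series
voxel_masker = NiftiMasker(mask_img=vision_img, smoothing_fwhm=5,
                           detrend=True, standardize=True, t_r=2)
voxel_ts = voxel_masker.fit_transform(nii_file, confounds=confounds_arr)

# ROI-to-ROI connectivity: averaging within each label already pools voxels,
# so no explicit smoothing is applied here
roi_masker = NiftiLabelsMasker(labels_img=atlas_img,
                               detrend=True, standardize=True, t_r=2)
roi_ts = roi_masker.fit_transform(nii_file, confounds=confounds_arr)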


@Steven Thanks! And is the smoothing step not already done by fmriprep?

It is only done for the AROMA outputs of fmriprep (6 mm kernel).