How is standardization done in nilearn.maskers.NiftiMasker?

Hi all, I want to compare the activation patterns in the ROIs across different conditions.

The idea is that I want to focus on the patterns themselves, excluding the effects of univariate results (i.e. the mean activation of a condition). To do that, I have to make sure that at every TR the time series is standardized across all the voxels in the ROI, i.e. within a specific ROI, the average activation of each condition is the same at every TR.

When using nilearn for MVPA, the tutorial doesn’t seem to say how the z-scoring step is implemented:
5.1. An introduction to decoding - Nilearn

I’d like to confirm how standardization works in nilearn.maskers.NiftiMasker. Does it standardize the time series across TRs for each voxel, rather than the way I described above (across voxels at each TR)?
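To make the two alternatives concrete, here is how I would write each one with NumPy/SciPy, assuming the masked data has shape (n_TRs, n_voxels) as NiftiMasker.fit_transform() returns (the array here is just random placeholder data):

```python
import numpy as np
from scipy.stats import zscore

# Placeholder for masked data of shape (n_TRs, n_voxels),
# as returned by NiftiMasker.fit_transform().
X = np.random.randn(100, 50)

# (a) z-score each voxel's time series, i.e. across TRs (axis=0)
X_per_voxel = zscore(X, axis=0)

# (b) z-score across voxels at each TR (axis=1), which is what I want:
# every TR then has mean 0 / std 1 over the ROI voxels.
X_per_tr = zscore(X, axis=1)
```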

Appreciate your help 🙂

Check out the documentation for the decoder (nilearn.decoding.Decoder - Nilearn):

standardize bool, default=True

If standardize is True, the data are centered and normed: their mean is put to 0 and their variance is put to 1 in the time dimension.

Meaning it is standardized in the time dimension, i.e. each voxel’s time series is z-scored.
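You can quickly verify this yourself; the file names below are placeholders for your own data:

```python
from nilearn.maskers import NiftiMasker

# File names are placeholders; substitute your own mask and functional image.
masker = NiftiMasker(mask_img="roi_mask.nii.gz", standardize=True)
X = masker.fit_transform("func.nii.gz")  # shape (n_TRs, n_voxels)

# Each column (one voxel's full time series) has been z-scored,
# so the statistics along the time axis come out near 0 and 1:
print(X.mean(axis=0))  # ~0 for every voxel
print(X.std(axis=0))   # ~1 for every voxel
```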

I suppose if you want to do it differently, you can access the data with .get_fdata(), z-score the array the way you want, and then build a new image with new_img_like (nilearn.image.new_img_like - Nilearn).
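A minimal sketch of that approach, assuming the goal from the question (z-scoring across voxels at each TR, restricted to an ROI); file names are placeholders:

```python
from scipy.stats import zscore
from nilearn.image import load_img, new_img_like

# File names are placeholders; substitute your own.
func_img = load_img("func.nii.gz")
mask_img = load_img("roi_mask.nii.gz")

data = func_img.get_fdata()               # shape (x, y, z, n_TRs)
mask = mask_img.get_fdata().astype(bool)  # shape (x, y, z)

# Boolean indexing pulls out the ROI voxels as a (n_voxels, n_TRs) array.
roi = data[mask]

# Z-score across voxels at each TR (axis=0 here, since voxels are the rows).
data[mask] = zscore(roi, axis=0)

# Wrap the modified array back into a NIfTI image with the same
# affine and header as the original.
new_img = new_img_like(func_img, data)
```

With the image rebuilt this way, you can pass it to NiftiMasker with standardize=False so nilearn doesn’t re-standardize on top of your own z-scoring.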
