What is "effect_size" in nistats.first_level_model.FirstLevelModel.compute_contrast?


Apologies if I’ve missed this in the documentation.

What exactly is the output_type = "effect_size" in nistats.first_level_model.FirstLevelModel.compute_contrast saving?

Are these Cohen's ds? Betas? Something else?


From what I understand, it should be the summed unstandardized coefficients from the contrast. Equivalent to COPE values (if you’re used to FSL) or con_XXX (SPM).


So to get standardized parameter estimates would we also need to use the image resulting from output_type="effect_variance"?


effect_size maps seem to look like FSL COPEs when multiplied with sqrt of effect_variance maps. This makes me think they are t-statistics but then there is also the ‘stat’ option for output_type so I’m still confused :anguished:


The effect_size values are definitely the raw parameter estimates. I ran a very simple model with no bells or whistles (i.e., no smoothing, no filtering, no rescaling) and compared against numpy.linalg.lstsq, and nistats’ effect_size values are the same as the lstsq estimates. Probably not the most efficient way to do it but it definitely clarified things for me.

To get the stat map for a t-contrast, you just do effect_size / sqrt(effect_variance).

Perhaps FSL and nistats have different default values for the rescaling or standardization of the data? Are you sure the contrasts you’re comparing are exactly the same as well?
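For anyone who wants to reproduce this check without real fMRI data, here is a toy sketch (simulated data, hypothetical variable names) of the two claims above: the effect_size is the contrast applied to the raw OLS betas, exactly what numpy.linalg.lstsq returns, and the t statistic is effect_size / sqrt(effect_variance):

```python
import numpy as np

# Simulated "one voxel" GLM -- not real fMRI data.
rng = np.random.default_rng(0)
n_scans = 120
X = np.column_stack([rng.standard_normal(n_scans),   # condition A regressor
                     rng.standard_normal(n_scans),   # condition B regressor
                     np.ones(n_scans)])              # intercept
y = X @ np.array([2.0, -1.0, 3.0]) + rng.standard_normal(n_scans)

# Raw parameter estimates, as from numpy.linalg.lstsq
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

con = np.array([1.0, -1.0, 0.0])                     # A - B contrast
effect_size = con @ beta                             # ~3.0 with this simulation

# effect_variance for the contrast: sigma^2 * c' (X'X)^-1 c
resid = y - X @ beta
sigma2 = resid @ resid / (n_scans - X.shape[1])
effect_variance = sigma2 * con @ np.linalg.inv(X.T @ X) @ con

t_stat = effect_size / np.sqrt(effect_variance)
print(effect_size, t_stat)
```

This is just the standard GLM algebra, but it makes it easy to see which output corresponds to which quantity.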


@tsalo Could you put this in a repo or just in a github gist? This would be great to have documented somewhere.

RE: original question. I can't offer much input, but I too am interested in using nistats (for the purpose of extracting beta coefficients for decoding). I had the same confusion when I was trying this out a few months back.


@danjgale Sorry, yes, I’ll try to get that up as a gist later today.


As @tsalo pointed out, effect_size is the sum of the (raw) linear model coefficients. Recently it was actually changed to the average across runs. Most likely you don't want those unnormalized coefficients.

You may want to use the default, output_type="z_score", or if you want the test statistic before conversion to a Z score, output_type="stat" (and possibly specify the stat_type, either "t" or "F").
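To make the "t before conversion to a Z score" point concrete, here is a small sketch of the conversion itself: the t statistic is mapped to the z value with the same tail probability. The t value and degrees of freedom below are made-up illustrative numbers, not from any real model:

```python
from scipy import stats

# Hypothetical values -- a t statistic from a GLM contrast and the
# residual degrees of freedom of that model.
t_value = 3.2
dof = 120

# Convert t -> z by matching one-sided p-values.
p = stats.t.sf(t_value, dof)   # one-sided p-value of the t statistic
z_value = stats.norm.isf(p)    # z with the same tail probability

print(z_value)  # slightly smaller than t_value, since t has heavier tails
```

With large degrees of freedom the two are nearly identical; the conversion matters more for short runs with few scans.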

The available outputs are actually better documented as methods of Contrast objects.



Thank you all for the responses. I’ve been playing around with this a bit more and now think that the (filtered and smoothed) time series is also scaled before being entered into the GLM. Though this doesn’t seem to be the case for the design matrix (unless I’m messing something up in my model).

If that’s the case then to get standardized regression coefficients (betas) for a level 1 image we could multiply the effect_size image by the std of the regressor in the design matrix (not having to do anything for the std of y, the time series, since it is 1). Is this correct?


If you multiply the coefficients by the std of the regressor, you will indeed get the coefficients you would have obtained by standardizing the design matrix first. Still, these "standardized" regression coefficients don't have a particularly meaningful scale. Using output_type="stat" and stat_type="t" instead will give you the coefficients scaled by their own standard deviation, and using output_type="z_score" will convert this to a Z statistic.
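The first claim above is easy to verify numerically. A toy sketch (simulated data, hypothetical variable names): the beta from the raw regressor, multiplied by that regressor's std, equals the beta you get after standardizing the regressor first:

```python
import numpy as np

# Simulated regressor and "time series" -- not real fMRI data.
rng = np.random.default_rng(1)
n = 200
x = rng.standard_normal(n) * 3.0 + 5.0      # raw (unstandardized) regressor
X = np.column_stack([x, np.ones(n)])        # regressor + intercept
y = 0.7 * x + rng.standard_normal(n)        # toy time series

# Beta from the raw design
beta_raw = np.linalg.lstsq(X, y, rcond=None)[0][0]

# Beta after standardizing the regressor first
x_std = (x - x.mean()) / x.std()
Xs = np.column_stack([x_std, np.ones(n)])
beta_standardized = np.linalg.lstsq(Xs, y, rcond=None)[0][0]

# beta_raw * std(x) == beta_standardized (up to float precision)
print(np.isclose(beta_raw * x.std(), beta_standardized))
```

This only demonstrates the scaling identity for the design matrix side; whether the time series itself is already rescaled (so that std(y) can be treated as 1) depends on the model's signal-scaling settings.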