Summary of what happened:
I’m running first-level and second-level models on the Nilearn language localizer dataset. The code is based on Nilearn’s own tutorials: (BIDS dataset first and second level analysis - Nilearn) and (Intro to GLM Analysis: a single-session, single-subject fMRI dataset - Nilearn).
We get different results from the first-level models between nilearn versions 0.9.1 and 0.10.0, without changing anything in the code. The results are similar but not identical: some voxels that survived the threshold before do not survive the same threshold now. I was wondering what causes this? As far as we can tell from the arguments of the first-level model objects, everything appears identical. The code is identical and the data are identical. Have there been any changes to default parameters in the first-level model objects (or in the .fit() or .compute_contrast() functions) from version 0.9.1 to 0.10.0 that could have caused this change in the results?
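One way to check for silently changed defaults is to dump the keyword-argument defaults of the model constructor under each nilearn version and diff the two printouts. Below is a minimal sketch of that idea; the helper `default_kwargs` is a hypothetical name, and the stand-in function `example` only illustrates the output format (its parameters `t_r`, `hrf_model`, `smoothing_fwhm` mimic real FirstLevelModel arguments but the values shown are not claimed to be nilearn's actual defaults).

```python
import inspect


def default_kwargs(obj):
    """Return {parameter_name: default_value} for every keyword
    argument of a callable that declares a default."""
    sig = inspect.signature(obj)
    return {
        name: param.default
        for name, param in sig.parameters.items()
        if param.default is not inspect.Parameter.empty
    }


# In each environment (0.9.1 and 0.10.0) you would run, e.g.:
#   from nilearn.glm.first_level import FirstLevelModel
#   print(default_kwargs(FirstLevelModel.__init__))
# and compare the two dictionaries line by line.

# Stand-in function so this sketch is self-contained:
def example(t_r=2.0, hrf_model="glover", smoothing_fwhm=None):
    pass


print(default_kwargs(example))
# → {'t_r': 2.0, 'hrf_model': 'glover', 'smoothing_fwhm': None}
```

The same trick applied to `.fit()` and `.compute_contrast()` would reveal any signature-level default changes; differences in internal behavior (e.g. a changed numerical routine) would not show up this way and would require checking the release changelog instead.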