Hi!

I have two runs with a different number of scans in each run, and a slightly different set of confounds per run. I tried following this tutorial: Nilearn: Statistical Analysis for NeuroImaging in Python.

The tutorial suggests creating one FirstLevelModel object and then fitting it twice, with different parameters for each run:

```python
glm = FirstLevelModel(
    t_r=t_r,
    slice_time_ref=0.5,
    hrf_model='spm + derivative',
    drift_model='cosine',
    high_pass=0.01,
    mask_img=fmri_mask,
    noise_model='ar1',
    standardize=True,
    minimize_memory=False,
)

glm_1 = glm.fit(fmri_img_1, events=events_1, confounds=confounds_1)
glm_2 = glm.fit(fmri_img_2, events=events_2, confounds=confounds_2)

design_matrix_1 = glm_1.design_matrices_[0]
design_matrix_2 = glm_2.design_matrices_[0]
```

In this case, both design matrices have the same shape, i.e. the same number of time frames, even though the original fMRI images had different numbers of time frames.

I also tried creating a new FirstLevelModel object and fitting it separately for the second run. In that case, the shape of each design matrix matches the number of volumes in that run's fMRI image.

Also, the results (e.g. t-maps, or significant activations at the second level) differ dramatically if I change only this one thing: how many FirstLevelModel objects I create (one or two).

Is it possible that I misunderstood the tutorial?