Aligning events after sample_mask in first-level GLM

Hi everyone,

I am currently running a first-level GLM analysis of task-based fMRI data. The data were preprocessed with fMRIPrep, where I flagged three dummy scans. When building the first-level GLM in Nilearn, I load the confounds using load_confounds(), and I pass the resulting sample_mask to first_level_model.fit().

I am unsure how the events should be aligned in this case. Specifically, should the event onsets be aligned

(1) with the start of the actual task (i.e., after the dummy scans), or
(2) with the start of the acquisition including the dummy scans (similar to how the confounds are indexed)?

Here is my code:

from nilearn.glm.first_level import FirstLevelModel
from nilearn.interfaces.fmriprep import load_confounds

confounds_simple, sample_mask = load_confounds(
    fmri_img,
    strategy=["motion"],
    motion="basic",
)

first_level_model = FirstLevelModel(
    t_r=tr,
    smoothing_fwhm=6.0,
    mask_img=mask,
)

first_level_model = first_level_model.fit(
    fmri_img,
    events=events,
    confounds=confounds_simple,
    sample_masks=sample_mask,
)

I would appreciate any clarification on the correct alignment strategy.

Best regards,
Annkathrin

IIUC, the event timing should be indexed the same way as the confounds.
If your TR is 2 s and you have 3 dummy scans at the beginning, then you have 3 corresponding confound rows, and the event onsets should include the 6 seconds corresponding to the dummy scans.
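For example, if your events file was logged relative to the first post-dummy volume, you would shift the onsets by the dummy-scan duration before fitting (a minimal sketch with made-up onsets; the TR and dummy count are taken from the numbers above):

```python
import pandas as pd

TR = 2.0     # repetition time in seconds
N_DUMMY = 3  # dummy scans flagged in fMRIPrep

# Hypothetical events logged relative to the first post-dummy volume
events = pd.DataFrame({
    "onset": [0.0, 12.0, 24.0],
    "duration": [4.0, 4.0, 4.0],
    "trial_type": ["face", "house", "face"],
})

# Re-express onsets relative to the start of the acquisition,
# i.e. including the dummy-scan period
events["onset"] = events["onset"] + N_DUMMY * TR
print(events["onset"].tolist())  # [6.0, 18.0, 30.0]
```

If your presentation software already timestamps events relative to the scanner trigger of the very first volume (dummies included), no shift is needed.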
In any case, you want to check visually that the design matrix meets your expectations.
HTH,
Bertrand

Hi,

Thanks for your quick response! Yes, I tried to visually inspect the design matrix, but since it still appears to start at volume 0 even after applying the sample_mask (and thus excluding the dummy scans), it was difficult for me to determine what the correct interpretation should be.

If I understand correctly, the GLM is first constructed with volume 0 corresponding to the first dummy scan, and then the sample_mask effectively filters the GLM by discarding the first three volumes. Is that correct?
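If so, my understanding in numbers would be something like this (illustrative scan count, TR as in my data; plain NumPy just to show the bookkeeping):

```python
import numpy as np

TR = 2.0
N_SCANS = 10  # made-up total number of volumes, dummies included
N_DUMMY = 3

# The design matrix is built on frame times that start at the first
# acquired volume (t = 0), dummy scans included
frame_times = np.arange(N_SCANS) * TR

# The sample_mask keeps only the post-dummy volumes
sample_mask = np.arange(N_DUMMY, N_SCANS)

# Fitting would then drop the masked rows, while the timing grid itself
# stays unchanged, so onsets must include the dummy period
kept_times = frame_times[sample_mask]
print(kept_times[0])  # 6.0, the time of the first retained volume
```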

Best,
Annkathrin

Oh, I see: the use of the sample_mask may make the timing information inconsistent.
Can you clarify which function, class, or method you are referring to here? That would help me narrow things down and give you more practical advice.
Best,
Bertrand

Yes, sure. I am running a first-level model for a task-based analysis, which I fit with events, confounds, and a sample mask. The sample mask only excludes the first three dummy scans. I am just unsure how to align the events.