I have a sample of 24 subjects, each with six resting-state fMRI sessions. However, T1 scans are missing at random across sessions (e.g., subject 01 has two T1 scans, while subject 02 has five). When running preprocessing with fMRIPrep, should I:

1. Use the default fMRIPrep pipeline, which averages all available T1 scans into a single anatomical reference (and brain mask) per subject, even though the number of T1 scans varies between subjects, or
2. Manually select the single best-quality T1 scan for each subject and use it consistently across all sessions?
Which approach would be optimal for generating consistent brain masks across subjects?
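For context, option 2 can be implemented without touching the raw data by passing fMRIPrep a BIDS filter file (`--bids-filter-file`) that restricts which T1w images the anatomical workflow sees. A minimal sketch; the session label is a placeholder you would set per subject after quality inspection:

```json
{
  "t1w": {
    "datatype": "anat",
    "suffix": "T1w",
    "session": "01"
  }
}
```

With a file like this saved as `filter.json`, the run would look like `fmriprep <bids_dir> <out_dir> participant --bids-filter-file filter.json`, so only the selected session's T1w enters the anatomical pipeline.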
I would use all the T1s available for each subject to generate that subject's individual brain mask. You can always create a single group-wise analysis mask later on.
Thank you for the quick response. Yes, the plan is to generate an individual mask for each subject, and I understand that fMRIPrep averages the available T1s into a single anatomical reference (and hence a single brain mask) per subject. In my dataset, each subject has six sessions of resting-state data; however, the number of sessions with a T1 scan varies across subjects. For example:
sub01: T1 available for four sessions
sub02: T1 available for two sessions
sub03: T1 available for six sessions
…
In this case, can I use the available T1s to generate the individual brain mask for each subject, despite the differences in the number of T1s? Should I account for these variations in the statistical analysis to prevent potential confounding?
What confounding would you be worried about? In your first-level models you can define a common group-wise mask based on the intersection of the subject-wise masks.
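The intersection mask suggested above can be sketched with NumPy; in practice you would load each subject's fMRIPrep-derived `desc-brain_mask.nii.gz` volumes with nibabel (the file name follows fMRIPrep's output naming; the toy arrays below are stand-ins for real 3-D mask volumes):

```python
import numpy as np

def intersect_masks(masks, threshold=1.0):
    """Combine binary subject-wise brain masks into one group mask.

    threshold=1.0 keeps only voxels present in every subject's mask
    (a strict intersection); lower values keep voxels covered by at
    least that fraction of subjects.
    """
    stacked = np.stack([np.asarray(m, dtype=bool) for m in masks])
    frac = stacked.mean(axis=0)  # fraction of subjects covering each voxel
    return frac >= threshold

# Toy 1-D example standing in for 3-D mask volumes:
sub01 = np.array([1, 1, 1, 0])
sub02 = np.array([1, 1, 0, 0])
sub03 = np.array([1, 0, 1, 0])
group = intersect_masks([sub01, sub02, sub03])
# group is True only where all three masks overlap
```

For real NIfTI images, `nilearn.masking.intersect_masks` implements the same idea (its `threshold` parameter works analogously) and handles affine/resampling details for you.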