I hope you are all doing well. I am facing some challenges while using Nilearn to set up a first-level model, and I would greatly appreciate your assistance with the following issues:
In my experiment, I have two categorical factors, each with two levels, giving four conditions (A1B1, A2B1, A1B2, A2B2), and two continuous variables (C, D) that I would like to incorporate into my design matrix. I looked into parametric modulation as a potential solution, but I noticed that “nilearn.glm.first_level.make_first_level_design_matrix” only accepts a single modulation column in the events table, which confuses me. I attempted to create the design matrix following the code in the Parametric Modulation example in nistats, but I am unsure how to handle the second modulator: when creating modulator2, should it multiply the original condition column, or the column already modulated by modulator1? In other words, should my design matrix columns be [“condition regressors, condition * modulator1 regressors, condition * modulator1 * modulator2 regressors”] or [“condition regressors, condition * modulator1 regressors, condition * modulator2 regressors”]? Additionally, I am uncertain whether I should remove the unmodulated condition regressors after introducing the condition * modulator columns. Would deleting or keeping the condition regressors affect the subsequent computation of main effects and interactions of the various variables?
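To make the two layouts concrete, here is a minimal numpy sketch of the second option (each modulator multiplies the *original* condition column, and the unmodulated condition column is kept). The trial values and the `modulated_regressor` helper are hypothetical illustrations, not Nilearn API; in a real analysis each column would still be convolved with the HRF.

```python
import numpy as np

# Hypothetical per-trial data for one condition (e.g., A1B1):
# an indicator for the condition's trials, plus per-trial values
# of the two continuous modulators C and D.
condition = np.array([1.0, 1.0, 1.0, 1.0, 0.0, 0.0])  # condition indicator
C = np.array([2.0, 4.0, 6.0, 8.0, 0.0, 0.0])          # modulator 1 values
D = np.array([1.0, 3.0, 5.0, 7.0, 0.0, 0.0])          # modulator 2 values

def modulated_regressor(cond, mod):
    """Mean-center the modulator over the condition's trials, then
    multiply it with the condition indicator (SPM-style modulation)."""
    trials = cond > 0
    centered = np.zeros_like(mod)
    centered[trials] = mod[trials] - mod[trials].mean()
    return cond * centered

# Each modulator multiplies the ORIGINAL condition column, not the
# already-modulated one, and the unmodulated column is kept:
X = np.column_stack([
    condition,                          # main condition regressor (kept)
    modulated_regressor(condition, C),  # condition * C
    modulated_regressor(condition, D),  # condition * D
])
```

Mean-centering each modulator within its condition keeps the modulated columns roughly orthogonal to the condition column, which is why the unmodulated condition regressor is usually kept: it retains its interpretation as the average response, while the modulated columns capture trial-by-trial deviations.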
I believe my modulators only have an effect in certain conditions (e.g., only in A1B1 and A2B1). Does this impose any limitations on using parametric modulation?
If I want to treat one of the continuous variables as a covariate of no interest, can I simply set the weight of its condition * modulator column to 0 in the contrast vector, similar to how motion parameters are handled?
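As I understand it, this would look like the following sketch: the covariate columns simply receive a zero weight in the contrast, exactly like motion regressors. The column names and their order are hypothetical, just to make the indexing explicit.

```python
import numpy as np

# Assumed (hypothetical) column order of the design matrix:
columns = ["A1B1", "A2B1", "A1B2", "A2B2",
           "A1B1xC", "A2B1xC",               # parametrically modulated columns
           "trans_x", "trans_y", "trans_z",  # motion parameters
           "constant"]

# Contrast for the main effect A1B1 - A2B1: covariates of no interest
# (the xC columns, motion parameters, constant) all get weight 0.
contrast = np.zeros(len(columns))
contrast[columns.index("A1B1")] = 1.0
contrast[columns.index("A2B1")] = -1.0
```

The covariate still influences the estimation (it absorbs variance it can explain); setting its contrast weight to 0 only means it is not part of the effect being tested.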
Can interactions be modeled between continuous and categorical variables (e.g., an interaction between A and C)? How does this differ from how we usually design contrast matrices? Could you provide a simple example? This is crucial for me!
On a separate note, I have noticed in some papers that regressors of no interest (e.g., for fixation periods) are included in the first-level analysis. What benefit does their inclusion bring? Does it help with deconvolution and improve the accuracy of the beta estimates for the variables of interest?
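My current understanding of this practice, as a sketch: such events are modeled explicitly as their own trial type instead of being left in the implicit baseline, so that the structured variance they carry is absorbed by their own regressor rather than inflating the residuals. The events table below is entirely hypothetical.

```python
import pandas as pd

# Hypothetical events table: fixation periods get their own trial_type
# instead of being left unmodeled in the implicit baseline.
events = pd.DataFrame({
    "onset":      [0.0, 10.0, 20.0, 30.0, 40.0],
    "duration":   [2.0,  2.0,  2.0,  2.0,  2.0],
    "trial_type": ["A1B1", "fixation", "A2B1", "fixation", "A1B2"],
})

# In contrasts, the "fixation" column would simply get weight 0; its
# role is to absorb variance so the error term behind the betas of
# interest is cleaner.
```

Whether this actually helps presumably depends on the design (e.g., whether the fixation periods carry systematic signal), so I would be grateful for the experts' view on when it is worthwhile.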
I understand that my wording might be a bit convoluted, and I am happy to clarify any ambiguous points. I apologize for reaching out on Neurostars so frequently lately, and thank you so much for your patient and thorough answers to my questions.