Running first-level analysis in Nilearn on a dataset with multiple runs and an inconsistent number of cosine_XX parameters across runs within a subject

Summary of what happened:

Hi Nilearn experts,

I am currently using Nilearn to run first-level analysis on my dataset preprocessed by fMRIPrep. I included motion parameters and cosine_XX parameters in the design matrix of each run. Due to my task design, the number of volumes can differ across runs, for example 343 in one run but 313 in another run of the same subject. Consequently, one run can have more cosine_XX parameters than the other runs of that subject. I padded additional columns of 0s to make sure the number of columns and their order are consistent within each subject (roughly as in the sketch at the end of this post). But then, in the first-level analysis, I got these warning messages:

/Users/shengjie/anaconda3/lib/python3.11/site-packages/nilearn/glm/_utils.py:176: RuntimeWarning: divide by zero encountered in scalar divide
  cond = smax / smin
/Users/shengjie/anaconda3/lib/python3.11/site-packages/nilearn/glm/_utils.py:180: UserWarning: Matrix is singular at working precision, regularizing...
  warn("Matrix is singular at working precision, regularizing...")

I found that in the final design matrix, those columns of 0s were regularized, so their values are no longer 0 but replaced with tiny values. I am quite new to Nilearn: is this something that I should pay attention to? Thanks in advance!
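
For reference, this is roughly how I padded the confounds across runs (a minimal sketch; pad_confounds and confounds_list are placeholder names, and the columns come from fMRIPrep's confounds files):

import pandas as pd

def pad_confounds(confounds_list):
    # Union of confound columns across all runs of a subject, in a fixed order
    all_columns = sorted(set().union(*(c.columns for c in confounds_list)))
    # Reindex each run: cosine_XX columns missing from a run become columns of 0s
    return [c.reindex(columns=all_columns, fill_value=0.0) for c in confounds_list]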

Best,
Shengjie

Dear Shengjie,
The strategy we recommend in Nilearn is to keep the design matrices full-rank, which implies no columns of 0s, and to adjust the contrasts accordingly instead (note that contrasts can be specified symbolically, which makes them adapt automatically to the design matrix).
Since you provided a design matrix with a column of zeros, Nilearn has to regularize it. At plotting time, the columns are normalized, which amplifies the regularized zero column.
I advise you to rely on the above strategy to avoid numerical issues.
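For instance, something along these lines (a minimal sketch; fmri_glm stands for your fitted FirstLevelModel, and cond_a / cond_b stand for the names of two task regressors present in every run's design matrix):

# The string is parsed against each run's own design-matrix columns,
# so runs with different numbers of cosine_XX regressors are handled per run.
z_map = fmri_glm.compute_contrast("cond_a - cond_b", output_type="z_score")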
Best,
Bertrand Thirion

Dear Bthirion,

Thanks for your reply!

I got the idea of inserting a column of 0s from Steven's answer in this post (I also saw your reply below it saying you did not use this approach though :wink: )

I also tried running without padding columns of 0s. In one of my subjects, I have 21 regressors in one run and 20 regressors in the other 3 runs, which leads to an inconsistent number of columns in the design matrices across the 4 runs. This is how I generated the contrasts from the design matrix of each run:

import numpy as np

def make_contrasts(design_matrix):
    # One identity contrast per design-matrix column
    contrast_matrix = np.eye(design_matrix.shape[1])
    contrasts = {
        column: contrast_matrix[i]
        for i, column in enumerate(design_matrix.columns)
    }

    # Pairwise differences between the task conditions of interest
    contrasts["decision_short - decision_long"] = (
        contrasts["decision_short"] - contrasts["decision_long"]
    )
    contrasts["decision_long - decision_short"] = (
        contrasts["decision_long"] - contrasts["decision_short"]
    )
    contrasts["confirmation_short - confirmation_long"] = (
        contrasts["confirmation_short"] - contrasts["confirmation_long"]
    )
    contrasts["confirmation_long - confirmation_short"] = (
        contrasts["confirmation_long"] - contrasts["confirmation_short"]
    )

    return contrasts
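
And this is roughly how I fitted the model and computed the contrast afterwards (a sketch of the relevant part; run_imgs, design_matrices and the exact arguments are placeholders standing in for my actual script):

from nilearn.glm.first_level import FirstLevelModel

fmri_glm = FirstLevelModel()
fmri_glm = fmri_glm.fit(run_imgs, design_matrices=design_matrices)

# Contrast vectors built from one run's design matrix, then applied to all 4 runs
contrasts = make_contrasts(design_matrices[0])
summary_statisticsNew = fmri_glm.compute_contrast(
    contrasts["decision_short - decision_long"],
    output_type="all",
)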

Then, when fitting the model and computing the contrasts, I got these warning messages:

/Users/shengjie/Library/CloudStorage/OneDrive-UGent/Collabration/DelayDiscounting_Jenya/code/GLM.py:122: UserWarning: One contrast given, assuming it for all 4 runs
  summary_statisticsNew = fmri_glm.compute_contrast(
/Users/shengjie/anaconda3/lib/python3.11/site-packages/nilearn/glm/contrasts.py:108: UserWarning: t contrasts should be of length P=22, but it has length 21. The rest of the contrast was padded with zeros.
  reg = regression_result[label_].Tcontrast(con_val)

I got the final results without crashing. I am also thinking that if I defined the contrasts symbolically, I might get the same message because of the inconsistent number of columns across the design matrices. Can I ignore this warning and assume everything is correct here?

Thanks again for your help!

Best,
Shengjie

AFAICT, it worked well. Best,
Bertrand

Thanks for checking!