Multi-run surface GLM in nilearn

Hi all,

I know this has been discussed before and is currently an active line of Nilearn development, but my question is time-sensitive, so I wanted to ask whether anyone can tell me if my proposed workaround is valid.

I have 6 runs of preprocessed BOLD GIFTI files (fsnative). I was able to use nilearn.glm.first_level.first_level.run_glm to fit a linear model to each run, and after computing contrasts I have 6 contrast objects per contrast of interest.

Here is the relevant code (some of the early data-loading code is left out for brevity; there were no issues loading the data):

import numpy as np
from nilearn.glm.first_level import run_glm
from nilearn.glm.contrasts import compute_contrast

contrast_objs = {} # Initialize empty data dictionary where the per-run contrast objects will go

##### THIS TAKES PLACE WITHIN A LARGER LOOP OVER THE 6 RUNS #####
## Run the GLM
Y = np.transpose(gii_data_std) # z-scored data array from one run
X = np.asarray(design_matrix) # design matrix made from BIDS events file
labels, estimates = run_glm(Y, X, verbose=1, n_jobs=-1) # perform GLM for the run 

## Define the contrast
contrast_matrix = np.eye(design_matrix.shape[1])
basic_contrasts = dict([(column, contrast_matrix[i])
                        for i, column in enumerate(design_matrix.columns)])

contrasts = {
    'facesGTother': (
        basic_contrasts['adult']/2
        + basic_contrasts['child']/2
        - basic_contrasts['body']/8
        - basic_contrasts['limb']/8
        - basic_contrasts['number']/8
        - basic_contrasts['word']/8
        - basic_contrasts['car']/8
        - basic_contrasts['instrument']/8
        - basic_contrasts['corridor']/8
        - basic_contrasts['house']/8
    ),
    'wordsGTother': (
        basic_contrasts['word']
        - basic_contrasts['body']/8
        - basic_contrasts['limb']/8
        - basic_contrasts['adult']/8
        - basic_contrasts['child']/8
        - basic_contrasts['car']/8
        - basic_contrasts['instrument']/8
        - basic_contrasts['corridor']/8
        - basic_contrasts['house']/8
    ),
}
  
## Compute the contrast
for index, (contrast_id, contrast_val) in enumerate(contrasts.items()):
    # Add a label to the output dictionary
    if contrast_id not in contrast_objs:
        contrast_objs[contrast_id] = []

    # compute contrast-related statistics
    contrast = compute_contrast(labels, estimates, contrast_val,
                                contrast_type='t')
    # add contrast to the output dictionary
    contrast_objs[contrast_id].append(contrast)

So now I have 6 items for each contrast in contrast_objs, and the individual-run GLM output maps all look good when saved back out as GIFTIs.
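For reference, here is roughly how I write each statistical map back out; this is just a minimal sketch using nibabel (the output filename and the choice of z_score() are illustrative):

# Minimal sketch of writing one run's z-map back to a GIFTI file.
# The nibabel usage and the output filename are illustrative; adapt to your setup.
import nibabel as nib
import numpy as np

z_map = contrast_objs['facesGTother'][0].z_score()  # z-values for one run, shape (n_vertices,)
darray = nib.gifti.GiftiDataArray(z_map.astype(np.float32))
nib.save(nib.gifti.GiftiImage(darrays=[darray]), 'sub-01_run-01_facesGTother_zmap.gii')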

I gather that, for the moment, I have to do the fixed-effects modeling myself rather than through a convenient wrapper function. My main question is: can I simply add the contrasts together one by one and then recompute session-wide statistical maps? It appears the __add__ method is what is used to compute fixed-effects models on volumetric data. For example, a quick-and-dirty approach (where contrast1 through contrast6 are the six per-run contrast objects for a given contrast) would be something like:

contrast_total = contrast1.__add__(contrast2)
contrast_total = contrast_total.__add__(contrast3)
contrast_total = contrast_total.__add__(contrast4)
contrast_total = contrast_total.__add__(contrast5)
contrast_total = contrast_total.__add__(contrast6)

betas = contrast_total.effect_size()
z_score = contrast_total.z_score()
t_value = contrast_total.stat()
p_value = contrast_total.p_value()
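
Or, a slightly tidier version of the same thing (Python's + operator should dispatch to __add__, so summing the per-run contrasts ought to be equivalent):

# Equivalent to the chained __add__ calls above, assuming + dispatches to Contrast.__add__
runs = contrast_objs['facesGTother']     # the six per-run contrast objects
contrast_total = sum(runs[1:], runs[0])  # runs[0] + runs[1] + ... + runs[5]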

Is that solution valid in the meantime while something more formal is being developed internally in Nilearn?

Thanks,
Steven


What you propose should be valid (assuming t-tests only).
You know Nilearn well :wink:
LMK if you encounter any issue.
Best,
Bertrand

Thanks @bthirion!

Indeed, I'm only doing t-tests. The results looked good as I was writing this post; my main concern was whether variances would be propagated properly across contrasts, since I am interested in z-scores. Good to know that this should work properly.
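
For anyone who finds this later, here is the quick sanity check I used; it assumes the Contrast object exposes effect and variance attributes (which is what __add__ appears to sum), so treat it as a sketch rather than a guaranteed API:

# Sanity check: the summed contrast should behave like a fixed-effects combination,
# i.e. effects and variances add across runs (assumes .effect and .variance exist).
import numpy as np

runs = contrast_objs['facesGTother']
manual_t = sum(c.effect for c in runs) / np.sqrt(sum(c.variance for c in runs))
combined = sum(runs[1:], runs[0])
assert np.allclose(manual_t, combined.stat())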

Best,
Steven