Contrasts 2x3 ANOVA in nilearn, different conditions across runs

Hi everyone,

I am trying to run a 1st-level analysis in nilearn. The experiment has 3 visual stimuli and 2 conditions. I have one design matrix per run, and each run contains only one condition (e.g. condition_1 for the 1st and 2nd runs and condition_2 for the 3rd and 4th runs). I need to:

  • specify an F-contrast that tests for differences among stimulus_1, stimulus_2 and stimulus_3 in one single contrast (rather than in 3 different pairwise contrasts);
  • and specify a contrast comparing the 2 conditions (which, as said before, are in different runs).

However, I am having problems specifying those contrasts (and I’m new to fMRI analysis, so sorry if these questions are too basic). I checked some examples (like Simple example of two-runs fMRI model fitting and Single-subject data (two runs) in native space) but I always get errors when trying to compute the contrasts (glm.first_level.FirstLevelModel.compute_contrast). Using 2D arrays always throws LinAlgError: Singular matrix, and 1D contrasts crash when trying to broadcast the arrays (e.g. ValueError: operands could not be broadcast together with remapped shapes [original->remapped]: (2,2) and requested shape (1,2)). Any clues on how to set up those contrasts?

Thank you very much in advance!

Can you share the design matrix and contrast specification you use? Could you, for instance, plot them and share the plot?
Thx for your collaboration,
Best,
Bertrand

I agree with @bthirion that giving us some code snippet and visualization will help us help you.

So you have 3 visual stim (a, b, c) and 2 conditions (1 and 2).
What you seem to have is:

  • run 1 has 1a, 1b, 1c
  • run 2 has 2a, 2b, 2c

And one thing you want to do is to compare conditions 1 and 2, which are in different runs.

Just note that comparing conditions across runs is not an optimal design from the fMRI perspective. In general you want to keep the “things” you want to compare in the same run.

And for good design efficiency it is also better that the trials you are trying to compare are not too far apart from each other within a run.

DesignEfficiency - MRC CBU Imaging Wiki.

Hi @bthirion and @Remi-Gau, thanks for your answers.

Here is one example of my design matrices (this is for the 1st run):

And for the contrasts I used:

  • for the main effect of stimulus:
np.vstack(([0, 0, 0, 1, -0.5, -0.5], 
           [0, 0, 0, -0.5, 1, -0.5], 
           [0, 0, 0, -0.5, -0.5, 1]))
  • for the main effect of condition (on stimulus processing):
np.vstack(([0, 0, 0, 1, 1, 1],
           [0, 0, 0, -1, -1, -1]))
  • for the interaction between stimulus and condition:
np.vstack(([0, 0, 0, 1, -0.5, -0.5],
           [0, 0, 0, -0.5, 1, -0.5],
           [0, 0, 0, -0.5, -0.5, 1],
           [0, 0, 0, -1, 0.5, 0.5],
           [0, 0, 0, 0.5, -1, 0.5],
           [0, 0, 0, 0.5, 0.5, -1]))
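
In case it is useful, here is roughly how I fit the model and compute the contrasts (a simplified sketch, not my actual script; the file names, TR and per-run design matrices below are placeholders):

import numpy as np
from nilearn.glm.first_level import FirstLevelModel

run_imgs = ["run-1_bold.nii.gz", "run-2_bold.nii.gz",
            "run-3_bold.nii.gz", "run-4_bold.nii.gz"]   # placeholder file names
design_matrices = [dm_run1, dm_run2, dm_run3, dm_run4]  # one DataFrame per run, built earlier

glm = FirstLevelModel(t_r=2.0, hrf_model="glover")       # made-up TR
glm = glm.fit(run_imgs, design_matrices=design_matrices)

# main effect of stimulus; the three leading zeros are the columns I am not testing
stimulus_effect = np.vstack(([0, 0, 0, 1, -0.5, -0.5],
                             [0, 0, 0, -0.5, 1, -0.5],
                             [0, 0, 0, -0.5, -0.5, 1]))

z_map = glm.compute_contrast(stimulus_effect, stat_type="F", output_type="z_score")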

Dear Elaine.pi,
Your contrasts are all rank deficient, which is not a good thing.
You can see this by observing that the rows always sum to 0, so they are linearly dependent. You will probably have issues with contrast estimation.
Removing the last row will likely solve the issue (or check that the contrast matrices are full rank using NumPy).
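
For instance, a quick check along these lines (an untested sketch, using your stimulus contrast as an example):

import numpy as np

stimulus_effect = np.vstack(([0, 0, 0, 1, -0.5, -0.5],
                             [0, 0, 0, -0.5, 1, -0.5],
                             [0, 0, 0, -0.5, -0.5, 1]))

# 3 rows, but the rows sum to the zero vector, so the rank is only 2
print(np.linalg.matrix_rank(stimulus_effect))   # -> 2

# dropping the redundant last row gives a full-rank F-contrast
stimulus_effect = stimulus_effect[:-1]
print(np.linalg.matrix_rank(stimulus_effect))   # -> 2, equal to the number of rows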
HTH,
Bertrand