Obtain ar1 estimates from nilearn.glm.first_level.FirstLevelModel()?

Sorry for the simple question, but how do I obtain the AR1 estimates after fitting a FirstLevelModel in nilearn?

After running

%load https://raw.githubusercontent.com/nilearn/nilearn/main/examples/04_glm_first_level/plot_spm_multimodal_faces.py

and

fmri_glm = FirstLevelModel(minimize_memory=False, noise_model='ar1', verbose=1)
fmri_glm = fmri_glm.fit(fmri_img, design_matrices=design_matrices)

I was expecting to find the AR1 estimates as an attribute of fmri_glm after fitting. What am I missing?

On a related note, fmri_glm.r_square[0].shape is (64, 64, 32, 1), while I was expecting it to be (64, 64, 32). Is that per design?

Appreciate your help,
Matthias

Hi @mekman

I might have misunderstood your question, sorry if this is the case.
Are you looking for the raw AR coefficients?
If so, I think you’d have to use lower-level functions like _yule_walker, as they are not stored in the model instance:
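For context, here is a minimal NumPy sketch of the Yule-Walker AR(1) estimate that a helper like _yule_walker computes from a residual time series (the helper's name and exact internals are nilearn implementation details and may change between versions; this is just the underlying idea):

```python
import numpy as np

def ar1_yule_walker(x):
    """Yule-Walker estimate of the AR(1) coefficient of a 1D series:
    lag-1 autocovariance divided by the variance."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

# Simulate an AR(1) process with rho = 0.5 and recover the coefficient
rng = np.random.default_rng(0)
rho = 0.5
noise = rng.standard_normal(10_000)
x = np.empty_like(noise)
x[0] = noise[0]
for t in range(1, len(noise)):
    x[t] = rho * x[t - 1] + noise[t]

print(ar1_yule_walker(x))  # close to 0.5
```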

Note that the coeffs are binned and used to compute the labels (accessible through fmri_glm.labels_), as you can see here:
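As a rough illustration of that binning step (not the exact nilearn code; nilearn's run_glm controls this with a bins parameter, 100 by default), voxels whose AR(1) estimates fall in the same bin end up sharing one label and hence one whitening matrix:

```python
import numpy as np

# Hypothetical per-voxel AR(1) estimates
ar_coefs = np.array([0.312, 0.308, 0.571, 0.566, 0.569])

# Discretize into bins so voxels with similar coefficients share a label
n_bins = 100
labels = np.round(ar_coefs * n_bins) / n_bins

print(labels)             # [0.31 0.31 0.57 0.57 0.57]
print(np.unique(labels))  # only two distinct labels remain
```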

On a related note, fmri_glm.r_square[0].shape is (64, 64, 32, 1), while I was expecting it to be (64, 64, 32). Is that per design?

Yes, it is by design (although it is sometimes a source of confusion). For example, when you do transform and inverse_transform of a 3D image, you will get a 4D image with length one in the time dimension. You can have a look at the issue “should inverse_transform always return 4D output?” (nilearn/nilearn#2726 on GitHub), where you will see that people are divided on the question.

Hope this helps!
Nicolas

Hi @NicolasGensollen. Thank you very much for your detailed reply.

OK, I am indeed interested in the raw AR coefficients. It makes sense to me that the values are not stored in the model instance, as 99 times out of 100 I don’t even bother looking at them. Your solution of invoking _yule_walker() directly works fine, cheers.

Also, thanks for the note on the 3D vs 4D images. As you can tell, I am only starting to get familiar with Nilearn. Thus far I have mostly used FSL, and while the results I get from Nilearn are very similar, I am currently in the process of understanding whether they are systematically different somehow. AFAIK, FSL uses a 3D Tukey taper for the AR estimation, and I was wondering how much difference that makes compared to the Nilearn/MNE AR implementation.

Thanks again for your help,
Matthias