I might have misunderstood your question, sorry if this is the case.
Are you looking for the raw AR coefficients?
If so, I think you'd have to use lower-level functions like `_yule_walker`, since the raw coefficients are not stored in the model instance.
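Since `_yule_walker` is a private helper whose exact signature may change between releases, here is a rough NumPy sketch of what a Yule-Walker AR fit does (the function name `fit_yule_walker` and its `order` argument below are my own, for illustration only):

```python
import numpy as np

def fit_yule_walker(x, order=1):
    """Estimate AR(order) coefficients of a 1D signal via Yule-Walker.

    Builds sample autocovariances and solves the resulting Toeplitz system.
    """
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    # Sample autocovariances r[0], ..., r[order]
    r = np.array([x[: n - k] @ x[k:] for k in range(order + 1)]) / n
    # Toeplitz system R a = r[1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])

# Example: simulate an AR(1) process with true coefficient 0.5
rng = np.random.default_rng(0)
noise = rng.standard_normal(5000)
sig = np.zeros_like(noise)
for t in range(1, len(sig)):
    sig[t] = 0.5 * sig[t - 1] + noise[t]

rho = fit_yule_walker(sig, order=1)
print(rho)  # close to [0.5]
```

For the real thing, check the helper's current location and signature in the nilearn source of the version you have installed.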
Note that the coefficients are binned and used to compute the labels (accessible through `fmri_glm.labels_`).
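As I understand it (worth double-checking against the nilearn source for your version), the binning amounts to rounding the per-voxel estimates so that voxels with similar autocorrelation share one AR model; the bin count below is an assumption of mine, purely for illustration:

```python
import numpy as np

# Hypothetical per-voxel AR(1) estimates (one value per voxel)
ar1 = np.array([0.312, 0.308, 0.151, 0.149, 0.702])

# Binning sketch: round to a fixed precision; voxels whose rounded values
# coincide get the same label, so one AR model is fit per bin, not per voxel.
bins = 100  # assumed precision, not necessarily what nilearn uses
labels = np.round(ar1 * bins) / bins
print(labels)  # three distinct labels: 0.31, 0.15, 0.7
```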
On a related note, `fmri_glm.r_square[0].shape` is `(64, 64, 32, 1)`, while I was expecting it to be `(64, 64, 32)`. Is that by design?
Yes, it is by design (although it is sometimes a source of confusion). For example, when you `transform` and then `inverse_transform` a 3D image, you will get a 4D image with length one in the time dimension. You can have a look at nilearn/nilearn#2726 ("should inverse_transform always return 4D output?"), where you will see that people are divided on the question.
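If the trailing length-one axis gets in the way, it is easy to drop after the fact. A minimal sketch of the shape round-trip, with plain NumPy arrays standing in for the image data:

```python
import numpy as np

vol = np.zeros((64, 64, 32))    # a single 3D volume
vol4d = vol[..., np.newaxis]    # 4D with a length-one time axis,
print(vol4d.shape)              # (64, 64, 32, 1)

# Drop the singleton time axis when a plain 3D array is wanted
vol3d = np.squeeze(vol4d, axis=-1)
print(vol3d.shape)              # (64, 64, 32)
```

For actual Nifti images, `nilearn.image.index_img(img, 0)` similarly extracts the single 3D volume from a length-one 4D image.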
OK, I am indeed interested in the raw AR coefficients. It makes sense to me that they are not stored in the model instance, as 99 times out of 100 I don't even bother looking at them. Your solution of invoking `_yule_walker()` directly works fine, cheers.
Also, thanks for the note on the 3D vs 4D images. As you can tell, I am only starting to get familiar with Nilearn. So far I have mostly used FSL, and while the results I get from Nilearn are very similar, I am currently in the process of understanding whether they are systematically different somehow. AFAIK, FSL uses a 3D Tukey taper for the AR estimation, and I was wondering how much difference that makes compared to the Nilearn/MNE AR implementation.
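I can't speak to FSL's exact implementation, but for reference, the Tukey-taper idea (shrink the raw autocorrelation estimate smoothly toward zero and truncate it beyond some lag before using it for prewhitening) can be sketched like this; the window length `m` is a free parameter here:

```python
import numpy as np

def acf(x, nlags):
    """Biased sample autocorrelation up to lag nlags."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    c = np.array([x[: n - k] @ x[k:] for k in range(nlags + 1)]) / n
    return c / c[0]

def tukey_taper(r, m):
    """Taper an ACF: lags below m are shrunk by a raised cosine, the rest zeroed."""
    k = np.arange(len(r))
    w = np.where(k < m, 0.5 * (1 + np.cos(np.pi * k / m)), 0.0)
    return r * w

rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
r = acf(x, nlags=50)
r_tapered = tukey_taper(r, m=15)
# Lag 0 is untouched (weight 1); lags at and beyond m are forced to zero
print(r_tapered[0], r_tapered[20])  # 1.0 0.0
```

The Nilearn AR(p) approach instead fits a low-order parametric model per bin of voxels, so the two can differ in how much long-range autocorrelation they absorb.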