Creating contrasts when using time and dispersion derivatives

Hi All,

I am using Nilearn's FirstLevelModel to model my fMRI data. I used an HRF model with the temporal and dispersion derivatives, and I have a question about creating contrast images that take the derivatives into consideration.
Is there a reason I should use the corresponding betas to compute an F-contrast, instead of using the betas to reconstruct the estimated/fitted HRF, extracting its peak amplitude, and then using that peak amplitude to compute t-contrasts? I haven't seen the latter approach offered as an option anywhere, and I'm really curious to know why that is. Perhaps it's not a very good approach, too complicated, or something else?

Thank you!

Best,
Jenni

Hi,
My main recommendation would be to consider the main (canonical) term only. Unless there is good reason to believe that the delays are large (>2 s), the benefit is not worth the effort imho.
A possible way forward is to test the set of derivatives (with an F test) to assess how much signal they capture. Ideally, this should not be very significant.
If you observe a big effect, you may want to shift your regressors in time.
Finally, you can run an F test including both the main effect and the derivatives, but I think that this is hard to interpret and I would not advise it.
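To make the second suggestion concrete, here is a minimal NumPy sketch of such an F test on the derivative betas. Everything here is made up for illustration (toy design, toy data, hand-rolled OLS); in practice I believe Nilearn's `FirstLevelModel.compute_contrast(..., stat_type='F')` does this for you on the fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120

# Toy design: a canonical regressor, its temporal derivative, a stand-in
# dispersion derivative, and an intercept (not the real SPM/Nilearn basis).
canonical = np.sin(np.linspace(0, 6 * np.pi, n))
time_deriv = np.gradient(canonical)
disp_deriv = 0.1 * rng.standard_normal(n)
X = np.column_stack([canonical, time_deriv, disp_deriv, np.ones(n)])

# Simulated voxel time series driven by the canonical term only.
y = 2.0 * canonical + rng.standard_normal(n)

# Ordinary least squares fit.
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
dof = n - X.shape[1]
sigma2 = resid @ resid / dof

# F test on the two derivative columns (indices 1 and 2): how much
# signal do the derivatives capture beyond the canonical term?
C = np.zeros((2, X.shape[1]))
C[0, 1] = 1.0
C[1, 2] = 1.0
cb = C @ beta
middle = C @ np.linalg.inv(X.T @ X) @ C.T
F = cb @ np.linalg.solve(middle, cb) / (C.shape[0] * sigma2)
print(F)
```

If F is large (very significant), the derivatives are soaking up real signal, which is the cue to reconsider the timing of your regressors.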
Best,
Bertrand

Thank you so much for your recommendations!

I have reason to believe that there may be delays in the response, but they may vary from participant to participant and potentially from one brain region to another.

However, I agree F-contrasts aren’t very easy to interpret. This is why I thought it might be a good idea to use all three betas to get the estimated HRF shape for each voxel and then extract the peak amplitude (whether positive or negative). I thought I could then use the HRF peak amplitude to calculate t-contrasts, which I think would be easier to interpret, whilst still using all the information from the three betas. However, I haven’t seen this approach used anywhere, which made me wonder if it’s a bad approach.
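For what it's worth, the approach described above can be sketched in a few lines of NumPy. Everything here is illustrative: a toy gamma-shaped canonical HRF with finite-difference derivatives standing in for the real basis, and made-up betas standing in for the three beta maps from the fitted FirstLevelModel.

```python
import numpy as np

# Toy HRF basis on a fine time grid (seconds): a gamma-like canonical
# shape, with temporal and dispersion derivatives approximated by
# finite differences (stand-ins for the SPM/Nilearn basis functions).
t = np.linspace(0, 32, 321)
canonical = (t ** 5) * np.exp(-t) / 120.0
time_deriv = np.gradient(canonical, t)
disp_deriv = np.gradient(time_deriv, t)

# Betas for one voxel (made up; in practice, the three beta estimates
# for this condition at this voxel).
b_can, b_td, b_dd = 1.8, 0.4, -0.1

# Reconstruct the fitted HRF from the three betas.
fitted_hrf = b_can * canonical + b_td * time_deriv + b_dd * disp_deriv

# Peak amplitude: the largest absolute deflection, keeping its sign.
peak = fitted_hrf[np.argmax(np.abs(fitted_hrf))]
print(peak)
```

The peak map would then be used in place of the canonical beta when forming t-contrasts.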

Best,
Jenni

AFAIK this is very hard to do with most current software.
You may want to consider GLMSingle (GitHub - cvnlab/GLMsingle: A toolbox for accurate single-trial estimates in fMRI time-series data)
Best,
Bertrand

You could also do np.sign(canonical) * np.sqrt((canonical ** 2) + (time_deriv ** 2) + (disp_deriv ** 2)), based on Calhoun et al. (2004), though I’m betting @bthirion’s recommendation of GLMSingle would give better results.
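Spelled out as a sketch (toy beta values; in practice these would be the three beta maps for the condition), the Calhoun et al. (2004) combined amplitude is just the sign of the canonical beta times the norm of all three betas:

```python
import numpy as np

# Made-up canonical / temporal-derivative / dispersion-derivative betas
# for four voxels (stand-ins for the three beta maps).
canonical = np.array([1.2, -0.8, 0.0, 2.5])
time_deriv = np.array([0.3, 0.1, 0.4, -0.2])
disp_deriv = np.array([-0.1, 0.2, 0.1, 0.3])

# Combined amplitude: magnitude from all three betas, sign from the
# canonical beta (note np.sign(0) is 0, so a zero canonical beta
# zeroes the combined amplitude).
amplitude = np.sign(canonical) * np.sqrt(
    canonical ** 2 + time_deriv ** 2 + disp_deriv ** 2
)
print(amplitude)
```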

Thank you both so much for the suggestions!