I’ve recently switched from using SPM and FSL via nipype to nistats and have been enjoying it a lot so far. However, I’ve run into a problem when trying to add parametric modulators to the model. There is no example in the nistats documentation, and it’s not clear where the parametric modulators can be added. Has anyone solved this problem already?

The function make_first_level_design_matrix allows the events argument (a DataFrame) to have a “modulation” column. This will create a modulated version of the condition’s regressor (rather than separating the modulated regressor from the main effect regressor). If you want to do the latter, then I think the following approach would work best:

from nistats.design_matrix import make_first_level_design_matrix
import numpy as np
n_scans = 200
tr = 2.
frame_times = np.arange(n_scans) * tr
# assuming you have onset, duration, trial_type, and modulation columns
# mean-center modulation to orthogonalize w.r.t. main effect of condition
events['modulation'] = events['modulation'] / events['modulation'].mean()
# create design matrix with modulation
dm_pm = make_first_level_design_matrix(
    frame_times,
    events,
)
# remove modulation column
events = events[['onset', 'duration', 'trial_type']]
# create normal design matrix with modulation column added
# this assumes that you have one trial type (trial_type), so
# you'll need to edit the regs and names if not
dm = make_first_level_design_matrix(
    frame_times,
    events,
    add_regs=dm_pm[['trial_type']],
    add_reg_names=['modulator*trial_type'],
)

You may want to sort the columns in the final design matrix (dm) before running the first-level models.
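If it helps, that sorting can be a one-liner. Here's a minimal sketch with a toy design matrix standing in for the real one (the column names are hypothetical):

```python
import pandas as pd

# toy design matrix with hypothetical column names
dm = pd.DataFrame(
    [[1., 0.5, 0.1, 1.]],
    columns=['trial_type', 'modulator*trial_type', 'drift_1', 'constant'],
)
# sort columns alphabetically so related regressors sit together
dm = dm.reindex(sorted(dm.columns), axis=1)
```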

I’ve only tested this a bit, but I believe that the resulting design matrix would be what you’re looking for.

Hi!
I have several questions regarding parametric modulation.

For the mean-centering, you suggest dividing by the mean, but other sources suggest subtracting the mean (e.g., Parametric Modulation by Bob Spunt, PhD). Which would be the correct way?

You mention two types of possible parametric modulation:
- create a modulated version of the condition’s regressor (as far as I understood, this part of the code does exactly that: dm_pm = make_first_level_design_matrix(frame_times, events))
- separate the modulated regressor from the main effect regressor (the rest of the code). As far as I understood, this is how SPM handles parametric modulation.
Can you please explain the difference between these two approaches, or maybe point me to the literature? I didn’t find anything.
Can we simply include the modulator as a column in the design matrix, so that we have e.g. 5 trial types and 1 modulator?

You’re right. It’s been a while since I wrote my initial response, so I’m not sure what I was thinking or if it was just a typo, but you should subtract the mean. Sorry about that!

If you just have one regressor (a modulated, un-centered one), then you’re essentially combining the intercept and slope of your line into one estimate. By separating them into two regressors (the original regressor and the mean-centered, modulated one), the first regressor corresponds to the intercept (average BOLD amplitude) and the second one corresponds to the slope (relationship between modulator and BOLD amplitude). Jeanette Mumford’s video on parametric modulation explains this well, IMHO.
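To make the intercept/slope point concrete, here's a toy least-squares fit in pure NumPy (ignoring HRF convolution; the "true" values 3.0 and 1.5 are made up for the simulation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# mean-centered modulator values, one per "event"
modulator = rng.normal(size=n)
modulator -= modulator.mean()
main = np.ones(n)  # un-modulated main-effect regressor
# simulated response: average amplitude 3.0, modulator slope 1.5
y = 3.0 * main + 1.5 * modulator + rng.normal(scale=0.1, size=n)
# fit both regressors together
X = np.column_stack([main, modulator])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[0] estimates the average amplitude (intercept),
# beta[1] the modulator-BOLD relationship (slope)
```

Because the modulator is mean-centered, the two columns are (nearly) orthogonal, so each estimate answers its own question independently.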

It depends on your hypothesis. When you have five unmodulated, condition-specific regressors, you’re working under the assumption that the mean amplitude of the BOLD response will vary by condition. If you have only one modulated regressor, then you’re assuming that the relationship between the modulator and the BOLD response will be constant across regressors. The figure here shows a few similar design matrices. What you’re proposing is essentially the second model, while I would generally recommend the third model.

I’d recommend the standard approach (5 condition regressors, and 5 condition*modulator regressors).
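A sketch of how one might prepare the events for that setup (all column values are hypothetical, the `_mod` suffix is just an illustrative naming convention, and I'm mean-centering within each condition, which is the usual choice):

```python
import pandas as pd

# toy events table with three conditions; extend to five the same way
events = pd.DataFrame({
    'onset': [0., 10., 20., 30., 40., 50.],
    'duration': [1.] * 6,
    'trial_type': ['a', 'a', 'b', 'b', 'c', 'c'],
    'modulation': [1., 3., 2., 6., 5., 7.],
})
# mean-center the modulator within each condition
mod_events = events.copy()
mod_events['modulation'] = (
    mod_events.groupby('trial_type')['modulation']
    .transform(lambda m: m - m.mean())
)
# rename the conditions so the design matrix gets separate
# condition*modulator regressors (hypothetical suffix)
mod_events['trial_type'] = mod_events['trial_type'] + '_mod'
# un-modulated copies keep a constant modulation of 1
unmod_events = events.copy()
unmod_events['modulation'] = 1.0
# one events table with both sets, ready for the design matrix
all_events = pd.concat([unmod_events, mod_events], ignore_index=True)
```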

One caveat: I don’t do a lot of parametric modulation these days, and someone else may be able to explain all of this much better than I can.