Grand mean scaling: still unclear about why

I am confused by grand mean scaling in fMRI analysis (i.e., in linear models of task-based designs) and am hoping that someone can clarify and/or point me to a reference. Grand mean scaling is the practice of dividing the timeseries of all voxels within a run by a single number, namely the mean of those timeseries over all voxels and timepoints. The result is then often multiplied by a number like 100. The main reason for grand mean scaling that I am familiar with is that, since the raw signal varies from run to run, grand mean scaling avoids bias in comparisons of regression coefficients across runs by standardizing the units of those coefficients across runs. For example, if run two has a ‘generally higher’ signal than run one, grand mean scaling both of them makes the regression coefficients from models fit to each run comparable.
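
For concreteness, here is a minimal numpy sketch of what I mean by run-wise grand mean scaling (the array shapes, baselines, and the function name are just illustrative):

```python
import numpy as np

def grand_mean_scale(data, target=100.0):
    """Rescale one run (timepoints x voxels) so that its grand mean,
    taken over all voxels and timepoints, equals `target` (typically 100)."""
    return data * (target / data.mean())

rng = np.random.default_rng(0)
run1 = 500.0 + rng.standard_normal((200, 1000))  # toy run, baseline ~500
run2 = 1.3 * run1                                # 'generally higher' raw signal
print(grand_mean_scale(run1).mean())             # ~100.0
print(grand_mean_scale(run2).mean())             # ~100.0, the factor of 1.3 is gone
```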

However, isn’t it the case that this only makes sense when ‘generally higher’ means something very specific? It can’t mean simply that the average signal is higher, because if that were the case then including a run-wise intercept would account for the differences. Instead, it seems that grand mean scaling carries the assumption that both the magnitude of the signal (e.g., the BOLD response to a stimulus) and the noise scale linearly with the mean of the signal.
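
To illustrate the distinction with made-up numbers (the regressor, baselines, and the factor 1.3 below are all invented): in the toy regression that follows, an intercept absorbs a purely additive offset between runs, whereas a multiplicative factor rescales the task coefficient itself, which is what grand mean scaling would undo.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = (np.arange(n) % 20 < 10).astype(float)  # toy on/off task regressor
noise = rng.standard_normal(n)
clean = 500.0 + 2.0 * x + noise             # true response amplitude = 2

run_additive = clean + 150.0                # additive offset only
run_multiplicative = 1.3 * clean            # multiplicative 'gain'

X = np.column_stack([np.ones(n), x])        # intercept + task regressor
for label, y in [("baseline", clean),
                 ("additive", run_additive),
                 ("multiplicative", run_multiplicative)]:
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    print(label, round(beta[1], 2))
# baseline       ~2.0
# additive       ~2.0  (the intercept absorbs the offset)
# multiplicative ~2.6  (the coefficient itself is rescaled: 1.3 * 2.0)
```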

So, I have two questions:

  1. Is it correct that the validity of grand mean scaling rests on the assumption that both signal and noise are linearly related to the average BOLD signal?
  2. Is there a reference for (1)?

Thanks


Yes, the idea is that the scaling of the signal is arbitrary (it may depend, e.g., on the scanner or on sequence choices), hence there is an unknown multiplicative factor applied to both the signal and the noise.
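
One way to write that out (the notation here is mine, not from the post): if the raw data in a run are signal plus noise times an unknown gain c, dividing by the run’s grand mean cancels c.

```latex
\[
  y = c\,(X\beta + \varepsilon)
  \quad\Longrightarrow\quad
  100 \cdot \frac{y}{\bar{y}}
  = 100 \cdot \frac{X\beta + \varepsilon}{\overline{X\beta + \varepsilon}},
\]
% since \bar{y} = c \cdot \overline{X\beta + \varepsilon}. The gain c
% cancels, so coefficients estimated from the scaled data are in
% comparable units across runs, scanners, and sequences.
```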

You might want to take a look at:
Chen, G., Taylor, P. A., & Cox, R. W. (2017). Is the statistic value all we should care about in neuroimaging? NeuroImage, 147, 952–959.

HTH

Bertrand