Nilearn provides a nice example of how to do seed-based functional connectivity analysis. To do this, they extract the mean time course from the seed ROI and then calculate voxelwise correlations with that time course across the entire brain.
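For concreteness, the core computation can be sketched in plain NumPy on synthetic data (this is not the actual nilearn code; the array shapes and the choice of seed voxels are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_voxels = 200, 1000
brain = rng.standard_normal((n_timepoints, n_voxels))  # voxel time courses (time x voxels)
seed = brain[:, :10].mean(axis=1)                      # mean signal of a hypothetical seed ROI

# z-score everything, so each voxel's Pearson r with the seed
# is just a dot product divided by the number of time points
seed_z = (seed - seed.mean()) / seed.std()
brain_z = (brain - brain.mean(axis=0)) / brain.std(axis=0)
corr = brain_z.T @ seed_z / n_timepoints               # one correlation value per voxel
```

In nilearn's example the same idea is wrapped in maskers that handle NIfTI images, but the arithmetic underneath is this dot product.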

How is this different from, or superior to, extracting the mean seed-ROI signal, running a GLM with it as a regressor, and obtaining voxelwise betas (and from them t-statistics)?

The formalism argument does make sense. However, I don’t understand why the t-statistic would be testing a less interesting hypothesis if it is actually equivalent to Pearson’s r.
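To be concrete about the equivalence I mean: for a single-regressor GLM with an intercept, the t-statistic on the slope is a deterministic function of Pearson’s r, namely t = r·√(n−2)/√(1−r²). A quick check with synthetic data (all values here are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
seed = rng.standard_normal(n)
voxel = 0.3 * seed + rng.standard_normal(n)  # a voxel weakly coupled to the seed

# GLM: voxel ~ intercept + seed
X = np.column_stack([np.ones(n), seed])
beta, *_ = np.linalg.lstsq(X, voxel, rcond=None)
resid = voxel - X @ beta
sigma2 = resid @ resid / (n - 2)             # residual variance, n - 2 dof
cov = sigma2 * np.linalg.inv(X.T @ X)
t_glm = beta[1] / np.sqrt(cov[1, 1])         # t-statistic on the seed beta

# same t derived from the correlation alone
r = np.corrcoef(seed, voxel)[0, 1]
t_r = r * np.sqrt(n - 2) / np.sqrt(1 - r**2)
```

The two t values agree to numerical precision, so the GLM route and the correlation route carry the same information for a single seed regressor.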

Could you explain that a bit more?

People tend to interpret t-statistics as inferential statistics (evidence against the null); they implicitly or explicitly convert them to p-values. There is some nuance between stating ‘the correlation between PCC and hippocampus is .28’ and ‘the correlation between PCC and hippocampus is z=3.18’.
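The usual route from a correlation to an inferential statement is the Fisher transform: z = atanh(r)·√(n−3), which is approximately standard normal under the null. A minimal sketch (the r and n values here are hypothetical, chosen only to illustrate the conversion):

```python
import math

r, n = 0.28, 120                        # hypothetical correlation and number of time points
z = math.atanh(r) * math.sqrt(n - 3)    # Fisher z statistic, ~N(0, 1) under the null
p = math.erfc(z / math.sqrt(2))         # two-sided normal p-value
```

Note that this p-value assumes independent samples, which fMRI time points are not; that caveat is exactly the first point below.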

I have two comments about that:

- quite often, the p-value associated with the z or t score does not hold, because some of its assumptions are unfulfilled (temporally correlated noise, etc.)
- absolute connectivity values should be interpreted with caution. Some arbitrary preprocessing choices (global signal regression, etc.) can alter them quite dramatically. In my view, what matters is how connectivity varies across conditions (individuals, groups, etc.), assuming that this variation does not itself fluctuate too much with preprocessing choices; this has to be checked, of course.

HTH.
