I found in several papers that, when computing the homogeneity of brain functional connectivity (i.e., the average pairwise Pearson correlation of fMRI time series between brain voxels/vertices), people first perform a Fisher r-to-z transform (essentially the atanh function) so that the values of the functional connectivity matrix are approximately normally distributed, e.g. the homogeneity code from Yeo’s lab.

However, why do we have to make them normally distributed if we are just interested in homogeneity? (I couldn’t find a paper that explains this.)
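For concreteness, here is a minimal sketch of the computation being asked about, assuming toy random data (the array shapes and variable names are illustrative, not from any particular lab's code):

```python
import numpy as np

# Toy data standing in for one parcel's fMRI time series:
# rows = voxels/vertices, columns = time points.
rng = np.random.default_rng(0)
ts = rng.standard_normal((5, 100))      # 5 voxels, 100 time points

r = np.corrcoef(ts)                     # 5x5 pairwise Pearson correlation matrix
iu = np.triu_indices_from(r, k=1)       # unique off-diagonal voxel pairs
z = np.arctanh(r[iu])                   # Fisher r-to-z transform
homogeneity_z = z.mean()                # average in z-space
homogeneity_r = np.tanh(homogeneity_z)  # back-transform to r for reporting
```

The question, then, is why the `arctanh`/`tanh` round trip is needed at all rather than simply averaging `r[iu]` directly.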

I remember when we started doing this “way back when” at Wash U when we first started trying to do group fcMaps and correlation matrices.

The issue is that correlation is not normally distributed (it is bounded between -1 and 1), and in the pre-permutation age this broke a number of assumptions for statistics that presume normality (e.g. standard t-test models, etc.).

There are several ways to handle this, but atanh is simple to code and less biased than several other methods. I remember looking at this paper back then:

Silver, N. C., & Dunlap, W. P. (1987). Averaging correlation coefficients: Should Fisher’s z transformation be used? Journal of Applied Psychology, 72, 146-148.
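A toy comparison makes the bias point concrete (the three r values below are made up for illustration): averaging r directly is not the same as averaging in z-space and back-transforming, because atanh is nonlinear.

```python
import numpy as np

# Hypothetical correlation values, chosen only to show the discrepancy.
rs = np.array([0.2, 0.5, 0.9])

mean_r_direct = rs.mean()                       # naive average of raw r
mean_r_fisher = np.tanh(np.arctanh(rs).mean())  # average in z-space, tanh back

# The two disagree because atanh stretches values near +/-1,
# so the z-space average is pulled toward the stronger correlations.
```

With these values the direct average is about 0.53 while the Fisher-averaged value is about 0.63; which estimator is preferable under which conditions is exactly what Silver & Dunlap examined.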

Often, and to this day, I still do any math on the r-to-z (rFz) maps/matrices, and only show the average r values in figures if desired. You have to make this clear in the methods (which most papers do).