AFNI 3dFWHMx output

Dear experts,

I wanted to ask a question about AFNI’s 3dFWHMx functionality.
I am using it to estimate the smoothness of my raw NIfTI data, and I was wondering what values one could expect for the ACF model parameters, as defined in AFNI (see below):

a*exp(-r*r/(2*b*b)) + (1-a)*exp(-r/c), plus the effective FWHM

More specifically, should these parameters be similar to my voxel size in each direction?
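For concreteness, the model above can be written as a small function; the parameter values here are made up purely for illustration, not typical 3dFWHMx outputs:

```python
import math

def mixed_acf(r, a, b, c):
    """AFNI's mixed ACF model: a Gaussian term plus an exponential-decay term."""
    return a * math.exp(-r * r / (2.0 * b * b)) + (1.0 - a) * math.exp(-r / c)

# Illustrative (made-up) values for the a, b, c parameters printed by 3dFWHMx -acf
a, b, c = 0.5, 2.0, 3.0
print(mixed_acf(0.0, a, b, c))   # 1.0 at r = 0, by construction
```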

Thanks a lot in advance!

Hi, @meka -

More description about the mixed ACF modeling is provided here:
FMRI Clustering in AFNI: False-Positive Rates Redux
See Eqs. 1-3 and surrounding text for a description, and Fig 3 for a comparison with the old Gaussian modeling.

When you run 3dFWHMx, you can output a plot of the average ACF, which also shows what the older Gaussian value would be, for comparison. This is a useful way to get a sense of what the estimated “size scale” of the blur of your residuals dataset is.

The size scale of a Gaussian is typically summarized by its standard deviation (sigma), but with the mixed ACF this is much harder to do. The mixed ACF will tend to have a heavier tail (a slower decrease to zero) than the Gaussian ACF. The paper above notes that the "Full-Width at Quarter-Maximum" (FWQM) might be a useful way of characterizing the distance scale of the autocorrelation, because it reflects the tail behavior more appropriately than the FWHM does. One might also look at the radius at which the autocorrelation drops to 0.5: in the AFNI Bootcamp dataset of simple task data (without REML used), that occurs at about r = 4.0 mm for both the Gaussian and mixed ACF models (that dataset was acquired at a voxel size of 2.75 x 2.75 x 3.00 mm). In the same Bootcamp example, the FWQM for the mixed ACF was about 6.25 mm (versus about 5.5 mm for the old/unused Gaussian approximation).
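As a sketch of how one might read off such scales from a fitted mixed ACF (the parameter values below are made up for illustration, not the Bootcamp values), one can numerically find the radius where the model crosses a given level; the FWQM is twice the radius where it crosses 0.25:

```python
import math

def mixed_acf(r, a, b, c):
    # AFNI's mixed ACF model: Gaussian term plus exponential-decay term
    return a * math.exp(-r * r / (2.0 * b * b)) + (1.0 - a) * math.exp(-r / c)

def crossing_radius(level, a, b, c, r_max=100.0, tol=1e-8):
    """Bisection for the radius at which the (monotone-decreasing) ACF hits `level`."""
    lo, hi = 0.0, r_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mixed_acf(mid, a, b, c) > level:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Made-up parameters, for illustration only
a, b, c = 0.6, 1.8, 3.5
r_half = crossing_radius(0.5, a, b, c)        # radius where the ACF = 0.5
fwqm = 2.0 * crossing_radius(0.25, a, b, c)   # Full-Width at Quarter-Maximum
print(r_half, fwqm)
```

Since the mixed ACF decays more slowly than a Gaussian at large r, the FWQM captures the tail in a way the FWHM cannot.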

I think it is hard to generalize about what common mixed ACF values would be. It would likely also depend on whether you are using REML to remove temporal autocorrelation structure. It surely depends on the acquisition as well: multislice/accelerated data might have more autocorrelation structure, perhaps; field strength will matter; etc.



Hello @ptaylor,

Apologies for my late reply; for some reason I did not get a notification of it!

Thanks a lot for the detailed reply; this is very helpful.
I will read your paper in detail and may possibly ask a few more questions!

Thank you very much again.

Hello @ptaylor ,

Thank you very much again for the helpful reply.
I do have a few more questions and it would be great if I may have your input on them.

  1. I have highly anisotropic voxels (1 x 1 x 5 mm^3); do you think this could pose a problem for the 3dFWHMx calculations for some reason?

  2. May I ask how the radius used for the ACF calculations is determined? The program's help says: "By default, the ACF is computed out to a radius based on a multiple of the 'classic' FWHM estimate." I could not figure out how this value is determined, but I did notice that I cannot set a radius smaller than the one automatically calculated by AFNI, only a larger one. Apologies in advance if the answer is obvious and I simply missed it!

  3. In my dataset of 45 participants (I used the raw data with the detrending flag), I compared the smoothness estimates from the ACF model vs. the classical estimation. To my surprise, the classical values were slightly but consistently higher than the ACF output. Interestingly, I have seen a similar pattern in a dataset with different acquisition parameters (higher field strength, smaller voxel size). But should it not be the other way around, since the classical method underestimates the smoothness?

Thanks a lot in advance!!

Hi, @meka -

Re. Q1: Well, yes, that will probably be a somewhat odd dataset on which to do spatial calculations such as 3dFWHMx. The estimated spatial smoothness is a function of radius only, i.e., averaged over all angles, as it were. The strong anisotropy of the voxel dimensions will create some oddities in the calculations.

Re. Q2: Some gory details here. If no radius is specified by the user, then 'acf_rad' is set to the larger of the following two quantities:
A) 2.999f * ccomb, where ccomb is the geometric mean of each subvolume's FWHM estimates (or the arithmetic mean of these, if the -arith option is used);
B) 3.999f * cbrtf(dx*dy*dz), which is basically 4 times the cube root of the voxel volume.
... and indeed, the code is set up so that one cannot choose a radius smaller than "B", even from the command line.
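In Python terms, that default-radius rule might look like the following (a sketch of the logic, not the actual AFNI source):

```python
def default_acf_rad(ccomb, dx, dy, dz):
    """Sketch of 3dFWHMx's default ACF radius: the larger of roughly
    3x the combined classic FWHM and roughly 4x the cube root of the voxel volume."""
    rad_a = 2.999 * ccomb                           # from the combined FWHM estimate
    rad_b = 3.999 * (dx * dy * dz) ** (1.0 / 3.0)   # from the voxel volume
    return max(rad_a, rad_b)

# Example with the original poster's 1 x 1 x 5 mm voxels and a made-up ccomb
print(default_acf_rad(1.38, 1.0, 1.0, 5.0))
```

For strongly anisotropic voxels like these, the voxel-volume term (B) can easily dominate, which is consistent with not being able to request a smaller radius.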

Re. Q3: Well, that is interesting. I have not seen data in the wild behaving that way. I assume this comparison comes from the image with the red and green lines (and the data) being shown? Is the ACF a good fit to the data line? The "mixed ACF" model is essentially a particular generalization of a Gaussian: a weighted sum of a Gaussian and an exponential decay. I suppose it is theoretically possible for the mixed ACF to have "shorter tails" than a Gaussian (or perhaps "leptokurtic" would be a more accurate term?), but I have not seen this. I suspect the severe anisotropy might have something to do with it, but I am certainly not sure. To be sure: is it residuals that you are inputting, or some other kind of data?



Hello @ptaylor

First of all, thank you very much for the helpful and quick replies!

Q1. I see, good to know! Actually, what we would like to do is use this function on the same dataset (with this anisotropic resolution) before and after certain denoising steps, to see how those steps affect the smoothness of the data. Would this still be a valid comparison, or would you suggest not using this function on such data?

Q2. Thank you very much for the technical explanation!

Q3. I observed this difference when I used the -ShowMeClassicFWHM option (please see the stdout from the program below for one participant; according to the program's help, the first line shows the classic/old estimation). I interpret this as the classic/old estimation giving me a larger FWHM estimate; is that correct?

```
0.857931  0.863189   3.56048    1.38152
0.863087  0.328694   2.3981     1.0597
```
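If the first line holds the classic per-axis FWHM estimates followed by their combined value, that combined number is consistent with the geometric mean of the three (as described for ccomb earlier in the thread); a quick check:

```python
# Per-axis classic FWHM estimates from the stdout above (first line)
fwhm_x, fwhm_y, fwhm_z = 0.857931, 0.863189, 3.56048

# Geometric mean of the three per-axis values
combined = (fwhm_x * fwhm_y * fwhm_z) ** (1.0 / 3.0)
print(combined)   # close to the 1.38152 in the output above
```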

I used the tool both on the raw data (with the detrending option) and on data after model estimation (residuals from FSL FEAT), with very similar results. Here I am attaching an example plot (from the same subject whose stdout is shown above).

I would also like to mention that I have seen this pattern consistently across >40 subjects, and also with different datasets (e.g., 3T vs. 7T data).

Thank you very much again.
Kind regards