I recently performed regional homogeneity (ReHo) analysis using C-PAC and AFNI’s 3dReHo command. However, upon assessing the results, I saw a stark contrast between the values from these two software packages. AFNI is generally more conservative in its estimates, with values ranging from 0.4 to 0.7 in the regions I am interested in, while C-PAC estimated values ranging from 0.8 to 0.99. Both calculate Kendall’s coefficient of concordance (KCC), so I was wondering where this difference stems from, and which one is preferred? Thank you very much for your kind help.
One thing to check is the neighborhood being used by each program, which might differ between the two calculations. Assuming you are doing a voxelwise calculation, the default in 3dReHo is nearest neighbors NN=3, meaning each voxel’s ReHo value is calculated over 27 voxels (self plus face-, edge-, and node-wise neighbors). But you can change that in the program, of course, defining various neighborhood rules and radii, or use ROIs. I’m not sure what C-PAC’s program does.
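As a concrete sketch of how those neighborhood choices look on the command line (the dataset, mask, and ROI file names here are just placeholders for your own files):

```
# Voxelwise ReHo with the default 27-voxel neighborhood:
3dReHo -prefix reho_nn27 -inset epi_proc+tlrc -mask brainmask+tlrc

# Same calculation with the smaller, facewise-only neighborhood:
3dReHo -prefix reho_nn07 -inset epi_proc+tlrc -mask brainmask+tlrc -nneigh 7

# ROI-based ReHo, computed over all voxels within each region of an ROI map:
3dReHo -prefix reho_rois -inset epi_proc+tlrc -in_rois my_rois+tlrc
```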
Note that a ReHo value of 0.99 would imply extreeeeeeemely similar time series; you could investigate that by looking at graph window views of the time series in the AFNI GUI, or in any other GUI that has similar functionality.
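For reference, the KCC that both programs compute is Kendall’s coefficient of concordance, W. For a neighborhood of $m$ voxels and $n$ time points, each voxel’s time series is converted to ranks over time, and $R_i$ is the sum of those ranks across the $m$ voxels at time point $i$:

$$ W = \frac{12\sum_{i=1}^{n}\left(R_i - \bar{R}\right)^2}{m^2\,(n^3 - n)} $$

$W$ reaches 1 only when all $m$ time series rise and fall in exactly the same rank order, which is why a value of 0.99 would be surprising for real, noisy data.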
You’re right that there is a contrast between the C-PAC and AFNI 3dReHo results despite both tools using KCC. C-PAC uses a stricter method for smoothing the time series, which typically results in higher ReHo values, especially in areas where functional connectivity is more homogeneous. C-PAC also typically applies spatial smoothing during preprocessing, whereas AFNI’s 3dReHo may or may not, depending on your settings. This could result in C-PAC giving higher regional homogeneity values in areas with more highly synchronized activity, since smoothing tends to increase correlations between neighboring voxels.
Both C-PAC and AFNI allow for customization of analysis parameters, but they come with different default settings. For instance, 3dReHo in AFNI has parameters like -detrend, -smoothing, and -mask that could influence the results if not set to match C-PAC’s settings. C-PAC has built-in settings that may automatically adjust the time series for motion or physiological noise, which could lead to more consistent (and potentially higher) ReHo estimates, especially in regions that are less affected by motion.
A preference for one tool over the other here depends on your goals for the data. AFNI’s 3dReHo values could be interpreted as more cautious estimates, especially in regions with weaker or more variable signal coherence, while C-PAC’s could indicate higher local coherence and less noise. Again, these high values might reflect more homogeneity in the time series due to the preprocessing and smoothing steps specified when running C-PAC. You can try to eliminate some of the discrepancy between the two by matching your preprocessing and analysis parameters as closely as possible (filtering, smoothing, motion correction, etc.). It would also be a good idea to compare the results from both tools across the brain regions you’re interested in and look for patterns: are the patterns of high ReHo values consistent between C-PAC and AFNI, even if the actual numbers differ? You could also try cross-validating with REST or FSL.
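If it helps, one quick way to do that kind of pattern comparison with AFNI tools would be something like the following (a sketch; the file names are placeholders for your two ReHo maps, mask, and ROI map):

```
# Pearson correlation of the two ReHo maps within a brain mask:
# values near 1 mean the spatial patterns agree, even if the scales differ.
3ddot -demean -mask brainmask+tlrc reho_afni+tlrc reho_cpac+tlrc

# Mean ReHo per ROI from each map, for a region-by-region comparison:
3dROIstats -mask my_rois+tlrc reho_afni+tlrc
3dROIstats -mask my_rois+tlrc reho_cpac+tlrc
```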
Please let me know if you have any additional questions or I can clarify further!
-Tamsin
@tamsinrogers : thanks for those comments and points.
I was assuming that @smolxd was running two different programs on the exact same input dataset; it would be good to clarify whether that was the case. If the original question refers to outputting ReHo values from two different pipelines with different options chosen (like blur value or detrending method), then you might get different answers. When using two different pipelines where care has been taken to match settings all along the way, such differences should be small; but if such matching has not been explicitly done, then the outputs can certainly differ, just as they would when using a single software package with different pipeline settings.
I am not sure what a “stricter method for smoothing a time series” is. A user should be choosing their own smoothing explicitly, I would think, to set a value appropriate for their data. For example, for single- vs. multi-echo data, one would choose different blurs; for different voxel sizes, one would likely blur differently; for ROI-based analysis, one probably should not blur at all. In AFNI, we don’t really have a default blur size, but we make some general recommendations (e.g., for single-echo FMRI headed into voxelwise analysis, choosing 1.5-2 times the minimal voxel dimension seems reasonable; for ME-FMRI, minimal blurring is probably best; for ROI-based analysis, none should be performed).
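As a sketch of what those recommendations look like in practice (assuming, say, 2.5 mm isotropic single-echo EPI and an afni_proc.py-style pipeline; only the blur-related pieces of the command are shown):

```
# Single-echo EPI with 2.5 mm voxels: blur ~1.5-2x the voxel size (here, 4 mm)
afni_proc.py                                                      \
    ...                                                           \
    -blocks tshift align tlrc volreg blur mask scale regress      \
    -blur_size 4.0                                                \
    ...

# For an ROI-based analysis, simply omit the "blur" block.
```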
Note that 3dReHo does not have a detrend option, nor does it do detrending. It assumes you have processed the data to your liking already separately.
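So if you do want detrending, one option (a sketch, with placeholder file names) is to do it as a separate step before the ReHo calculation:

```
# Remove slow polynomial drifts (here, up to order 2) from the time series:
3dTproject -input epi+tlrc -polort 2 -prefix epi_detrended

# ... and then compute ReHo on the detrended data:
3dReHo -prefix reho_detr -inset epi_detrended+tlrc -mask brainmask+tlrc
```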
I’m not sure how one could conclude that one software’s overall values are “more cautious” from this evaluation. Robustness could be assessed, but ideally, if two programs that calculate ReHo are run on the same inputs, the results should be essentially the same.
But again, a key question first of all is whether the original comparison is when two ReHo-calculating programs are given the same input dataset and have different output numbers, or whether a user has processed the data in two different ways, in which case there should be more of a step-by-step comparison of processing choices.
Thank you @ptaylor and @tamsinrogers for the responses! I did use the same inputs for both pipelines and performed smoothing in both. I might want to get rid of that, since I’m doing ROI-based ReHo analysis, as noted by @ptaylor. Aside from that, all the other options (neighborhood, mask) are the same.
So, it sounds like you want to compare your processing pipeline options between what you did with C-PAC and AFNI to get at the root of the differences, not merely compare the ReHo programs in each package.
On the AFNI processing side, if you used afni_proc.py, then you have a few ways to check processing options:
- the option specifications for each processing block within the afni_proc.py command
- the commented “proc” script that afni_proc.py creates, which gives you a guided tour through each operation performed
- a couple of output summary files, such as the @ss_review_basic output or the out.ss_review.*.txt text file
You can compare the major processing steps you chose to make sure that they match between the two packages (like blur size, or even whether blurring was included at all, which it sounds like should not be the case here, given the ROI-based analysis).
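For example, something along these lines (the subject ID and file names are hypothetical; adjust them to your own outputs):

```
# Find the blur-related commands and options in the generated proc script:
grep -i blur proc.sub-001

# Look over the single-subject review summary for the settings that were used:
cat out.ss_review.sub-001.txt
```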