Hi, I noticed that most published ALE meta-analyses do not explicitly mention the limitation that only studies with significant clusters are included in the analysis. Nor do they mention how many studies without significant clusters were excluded.
How big of a problem is this, and is there a proper way to resolve it?
A crude meta-analysis runs into two key problems. First, filtering results through arbitrary significance thresholds invites the classic issue of “double dipping,” which biases the meta-analytic findings, a problem seen in the NARPS study and discussed in this recent paper. Second, effect magnitudes are commonly not reported in neuroimaging, which further undermines any attempt at a rigorous and accurate meta-analysis.
I think there are two distinct concerns here. The first is that neuroimaging meta-analyses don’t (generally) take non-significant studies into account. The second is that the ALE algorithm doesn’t use non-significant studies. The former is the result of publication bias, selective reporting, and the fact that very few people share unthresholded statistical maps in open databases like NeuroVault. The latter is just a limitation of the ALE method.
Studies without foci (significant clusters) contribute nothing to the ALE method, unlike, say, the MKDA chi-squared algorithm, so you can consider this a flaw not just in any individual analysis, but in the algorithm itself. There are workarounds, like the method from Acar et al. (2018), which I believe iteratively adds null studies with random foci sampled across the brain mask to account for publication bias, but I don’t think any such method has been directly incorporated into software implementing ALE. Of course, ALE generally outperforms MKDA chi-squared (at least based on papers I’ve read, like Salimi-Khorshidi et al., 2008), so that one limitation might not be enough reason to switch from ALE to another method. Neither ALE nor MKDA chi-squared can use effect magnitudes, though, so again there are limitations baked into the methods themselves.
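To make the “focus-free studies are inert” point concrete, here’s a toy numpy sketch, not a faithful ALE implementation (the MA values are simulated stand-ins), of the probabilistic union ALE uses to pool per-study modeled activation (MA) maps:

```python
import numpy as np

# ALE pools per-study "modeled activation" (MA) maps with a probabilistic
# union: ALE(v) = 1 - prod_i (1 - MA_i(v)).
# A study reporting zero foci has MA_i(v) = 0 at every voxel, so its
# factor is (1 - 0) = 1 and it drops out of the product entirely.

rng = np.random.default_rng(seed=0)
n_voxels = 10_000

ma_with_foci = rng.uniform(0.0, 0.3, size=(4, n_voxels))  # 4 studies with foci
ma_null = np.zeros((2, n_voxels))                         # 2 focus-free studies

ale_without_nulls = 1 - np.prod(1 - ma_with_foci, axis=0)
ale_with_nulls = 1 - np.prod(1 - np.vstack([ma_with_foci, ma_null]), axis=0)

# Identical maps: the focus-free studies had no influence at all.
assert np.allclose(ale_without_nulls, ale_with_nulls)
```

An Acar-style correction, as I understand it, would replace those all-zero MA maps with maps built from randomly placed foci, so null studies dilute the evidence rather than vanishing from the product.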
When you have access to unthresholded statistical maps, you can use image-based meta-analysis (IBMA) methods, which do incorporate non-significant results. I strongly recommend image-based meta-analysis over coordinate-based meta-analysis whenever it’s viable, as with the NARPS dataset. However, most studies only report coordinates, so we’re mostly stuck with coordinate-based meta-analyses. There are some coordinate-based approaches that can use effect magnitudes or combine statistical maps and coordinates across studies, such as SDM, but those have their own issues.
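For the image-based case, the simplest combination rules are easy to sketch. Here’s a minimal Stouffer’s z-combination in numpy (one of several IBMA estimators; the z-maps below are simulated stand-ins, where in practice you’d load coregistered unthresholded maps, e.g. from NeuroVault):

```python
import numpy as np
from scipy import stats

def stouffers_ibma(z_maps: np.ndarray) -> np.ndarray:
    """Combine a (k_studies, n_voxels) stack of z-maps into one z-map.

    Every study contributes its actual voxelwise evidence, so
    non-significant results pull the combined estimate toward zero
    instead of being silently dropped.
    """
    k = z_maps.shape[0]
    return z_maps.sum(axis=0) / np.sqrt(k)

rng = np.random.default_rng(seed=0)
z_maps = rng.normal(loc=0.5, scale=1.0, size=(10, 10_000))  # 10 fake studies

combined_z = stouffers_ibma(z_maps)
combined_p = stats.norm.sf(combined_z)  # one-sided p-values per voxel
```

Tools like NiMARE implement this and related weighted variants properly, but the core point holds either way: with full maps, “nothing survived thresholding” is still usable data.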