Applicability of jackknife analysis in ALE meta-analysis?

Hello everyone!

I am conducting an ALE (activation likelihood estimation) meta-analysis and am interested in assessing the stability of the results. During my literature review, I found that some studies use jackknife (or “leave-one-out”) analysis to test the robustness of ALE meta-analytic results: for instance, I came across published examples (such as Figure 9 in some papers) where the analysis was repeated with one study omitted each time, to check whether the results depended on any single study. This approach seems to offer a straightforward way to assess the stability of findings.
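For concreteness, the procedure might be sketched with NiMARE as below. This is only an illustration of the leave-one-out idea, not code from any of the papers: the dataset filename is a placeholder, and the thresholding and cluster-comparison step is left as a comment.

```python
# Minimal sketch of a leave-one-out (jackknife) loop over an ALE analysis.
# "my_dataset.json" is a placeholder path to a NiMARE dataset file.
from nimare.dataset import Dataset
from nimare.meta.cbma.ale import ALE

dset = Dataset("my_dataset.json")
full_result = ALE().fit(dset)  # ALE on all experiments

for left_out in dset.ids:
    # NiMARE IDs are at the experiment (study-contrast) level.
    reduced = dset.slice([i for i in dset.ids if i != left_out])
    loo_result = ALE().fit(reduced)
    # Threshold loo_result and check whether each cluster from the
    # full analysis survives without this experiment.
```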

However, in Acar et al. (2018), “Assessing robustness against potential publication bias in Activation Likelihood Estimation (ALE) meta-analyses for fMRI,” the authors seem to advise against using jackknife analysis with ALE.

I am somewhat confused: some studies have applied this method, which suggests that jackknife analysis may still have a place in ALE meta-analyses. I would therefore like to ask the following questions:

  1. Is there a theoretical basis or other rationale for not recommending the use of jackknife analysis in ALE?

  2. Is the fail-safe N method sufficient for assessing the robustness of ALE results?


Hello!

I forwarded your message to my colleagues to see if they have any tips.

I did find this article: https://www.sciencedirect.com/science/article/pii/S1053811923005347

It suggests that the jackknife may overestimate the robustness of certain clusters, because a cluster can survive the jackknife procedure while being supported by only a small number of studies.

-Alejandro


@tsalo recommends taking a look at FocusCounter in the NiMARE documentation.
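For reference, a minimal FocusCounter sketch might look like the following. The dataset path is a placeholder, the target_image name assumes the map naming produced by NiMARE's Monte Carlo FWE corrector, and the table names in the output vary across NiMARE versions, so inspect the returned result rather than relying on this exactly.

```python
from nimare.correct import FWECorrector
from nimare.dataset import Dataset
from nimare.diagnostics import FocusCounter
from nimare.meta.cbma.ale import ALE

dset = Dataset("my_dataset.json")  # placeholder path
result = ALE().fit(dset)
corrected = FWECorrector(
    method="montecarlo", voxel_thresh=0.001, n_iters=1000  # raise n_iters for real analyses
).transform(result)

# For each significant cluster, count how many experiments report at
# least one focus inside it.
counter = FocusCounter(
    target_image="z_desc-size_level-cluster_corr-FWE_method-montecarlo",
    voxel_thresh=None,
)
counted = counter.transform(corrected)
print(counted.tables.keys())  # exact table names depend on the NiMARE version
```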

The jackknife is a predecessor of the bootstrap, and is a reasonable way to characterise the uncertainty of an estimator. However, some quick reading suggests that the jackknife is not as robust as the bootstrap, and doesn’t work well with estimators that are very nonlinear functions of the input data and/or highly skewed. As ALE scores are all positive and quite skewed, I’m not sure how well it will work. It deserves a thorough evaluation, and it looks like the Frahm et al. (2023) study that Alejandro referenced attempts this, though only for the specific setting of VBM (and, in that setting, the jackknife didn’t seem to do a great job).
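To make that concrete, here is a small, self-contained numpy sketch (generic statistics, not ALE-specific) contrasting the delete-one jackknife with the bootstrap for a non-smooth statistic of positive, skewed data; the sample median is a textbook case where the delete-one jackknife variance estimate is unreliable.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=51)  # positive, right-skewed sample

def stat(sample):
    # The median is not a smooth function of the data, a classic setting
    # where the delete-one jackknife misbehaves.
    return np.median(sample)

n = len(x)

# Delete-one jackknife standard error.
loo = np.array([stat(np.delete(x, i)) for i in range(n)])
jack_se = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))

# Bootstrap standard error from resampling with replacement.
boot = np.array([stat(rng.choice(x, size=n, replace=True)) for _ in range(5000)])
boot_se = boot.std(ddof=1)

print(f"jackknife SE: {jack_se:.3f}  bootstrap SE: {boot_se:.3f}")
```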

Sorry to not have a more authoritative answer!

Frahm, L., Satterthwaite, T. D., Fox, P. T., Langner, R., & Eickhoff, S. B. (2023). ALE meta-analyses of voxel-based morphometry studies: Parameter validation via large-scale simulations. NeuroImage, 281, 120383.


Hi Alejandro,

Thank you for forwarding my question to your colleagues and for sharing that article! I really appreciate you pointing out the issue with the jackknife method possibly overestimating the robustness of certain clusters.

Additionally, I found two articles in which the authors used NiMARE to perform both jackknife and fail-safe N analyses on ALE results. The code in the two articles is quite similar (a sketch of the NiMARE jackknife call follows the list):

  1. https://www.sciencedirect.com/science/article/pii/S1053811921007114
  2. https://www.sciencedirect.com/science/article/pii/S0149763424000538
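For anyone comparing approaches, a minimal sketch of the NiMARE jackknife diagnostic is below. It reuses the corrected result from the FocusCounter sketch earlier in the thread (placeholder dataset path, version-dependent table names). I am not showing a fail-safe N step because I am not aware of a built-in NiMARE implementation for it.

```python
from nimare.diagnostics import Jackknife

# `corrected` is the Monte Carlo FWE-corrected MetaResult produced in the
# FocusCounter sketch above (same placeholder setup).
jackknife = Jackknife(
    target_image="z_desc-size_level-cluster_corr-FWE_method-montecarlo",
    voxel_thresh=None,
)
jk_result = jackknife.transform(corrected)
print(jk_result.tables.keys())  # table names vary across NiMARE versions
```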

To assess the robustness of my results, I had previously adapted the code from these two articles. However, I have started to question how effective the jackknife method really is for evaluating ALE results.

I’m very grateful for your and @tsalo’s insights, and I will try the FocusCounter method in NiMARE.

Thank you for the helpful insight! I will read the Frahm et al. (2023) paper to learn more about the bootstrap method.