Sanity check: NiMARE ALE analysis

Hi,

In GingerALE, I would use the following settings to analyse a single dataset: a cluster-level FWE value of 0.05, 10,000 permutations, and a p-value of 0.001. In other words, a cluster-based FWE correction using 10k permutations, with a cluster-forming threshold of p < 0.001 and a cluster-level correction of p < 0.05.

However, doing the above analysis in GingerALE is a bit of a nightmare because it takes forever, so I would like to use NiMARE to do a similar analysis instead. I would really appreciate a short sanity check to make sure that I'm doing what I think I'm doing.

My current pipeline consists of the following steps:

  1. Use convert_sleuth_to_json() to convert my Sleuth-formatted text file to JSON format
  2. Fit the JSON dataset with nimare.meta.cbma.ALE()
  3. Apply family-wise error rate correction with nimare.correct.FWECorrector(method='montecarlo', voxel_thresh=0.001, n_iters=10000, n_cores=7)
  4. Save all the resulting maps with cres.save_maps()
  5. Extract the cluster table using nilearn.reporting.get_clusters_table(the resulting z-map, 3.291)

If I understand the boilerplate text in NiMARE correctly, I would basically end up with the following description of my analysis (besides providing a link to the actual notebook):

A cluster-forming threshold of p < 0.001 was used to perform cluster-level FWE correction. 10,000 iterations were performed to estimate a null distribution of cluster sizes, in which the locations of coordinates were randomly drawn from a gray matter template and the maximum cluster size was recorded after applying an uncorrected cluster-forming threshold of p < 0.001. The negative log-transformed p-value for each cluster in the thresholded map was determined based on the cluster sizes.

Are there any steps I'm missing in my NiMARE pipeline so that I end up with an analysis similar to the GingerALE analysis I described at the beginning?

Thanks a lot for any thoughts,
Max

Hi Max,

You can just go straight to the Dataset object with nimare.io.convert_sleuth_to_dataset().

This will also eliminate the need to create a Dataset object from the json, which is a step that was missing in your writeup.
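
For example (untested; the file name is a placeholder for your Sleuth text file):

from nimare.io import convert_sleuth_to_dataset

# Build a NiMARE Dataset directly from the Sleuth-formatted text file
dset = convert_sleuth_to_dataset("sleuth_file.txt")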

You can also skip steps 1-4 if you use the command-line interface’s ale subcommand.

The cluster-level FWE-corrected z-map should have a cluster-level z-value for all voxels in each cluster, which means that the threshold you want is the one-sided 0.05 one, i.e., ~1.645 instead of 3.291. Just make sure that you grab the map that has corr-montecarlo and level-cluster in the name.
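
For example, something like this should give you the thresholded cluster table (untested; the file name is a placeholder for whichever corrected map you saved):

import nibabel as nib
from nilearn.reporting import get_clusters_table

# Placeholder file name; use the saved map with corr-montecarlo and level-cluster in its name
cz_img = nib.load("z_level-cluster_corr-montecarlo.nii.gz")

# One-sided alpha of 0.05 corresponds to z ~ 1.645
table = get_clusters_table(cz_img, stat_threshold=1.645)
print(table)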

The steps should generally be good, with the minor caveats I mentioned above. I would recommend using the most up-to-date version from GitHub instead of installing through pip, since we haven’t made a release in a while and there have been a number of minor fixes and improvements since the last one.

I hope that helps and please follow up with any more questions!

Best,
Taylor

Hi Taylor,

Awesome, thanks for the quick and super helpful reply!

Best,
Max

BTW, I just released v0.0.4, so if you're not a regular git/GitHub user you can install it more easily with pip install NiMARE==0.0.4.

Hi Taylor,

I updated NiMARE to 0.0.4 (pip install git+https://github.com/neurostuff/NiMARE.git)

and tried the following in the terminal:
nimare ale --n_iters 10000 --v_thr 0.001 --n_cores 7 Input.txt
(where Input.txt is my sleuth formatted file)

but then I get the error:
‘FileNotFoundError: No such file or no access: ‘/Users/mckitchen/miniconda3/envs/py36/lib/python3.6/site-packages/nimare/resources/templates/MNI152_2x2x2_brainmask.nii’’

This makes sense given the contents of my local ‘resources/’ folder, as there is no ‘templates/’ folder in there. The command line works if I copy the ‘templates’ folder and its contents from GitHub to my local nimare folder.

Best,
Max

Hm… that’s odd. I think it’s a packaging issue (I can never seem to get Python packaging working perfectly). Since this is a bug, I’ve opened an issue in the repository (neurostuff/NiMARE#426). I’ll try to resolve it there.

It should be good now. There’s no release associated with the fix yet but you can install the way you did before (pip install git+https://github.com/neurostuff/NiMARE.git).

Yep, worked like a charm. Thanks!
M

Hi Taylor,

Thanks for all the help so far. I have two final questions:

  1. How do I cite the NiMARE package?
  2. I'm not sure I understand the output of the subtraction analysis. Reading the documentation, NiMARE does it in a slightly different way than GingerALE would (NiMARE uses all voxels, GingerALE only the voxels that survived the initial contrast).

If I have:

  • two sleuth input files,
  • use ‘nimare ale inputA.txt --file2 inputB.txt --n_iters 10000 --v_thr 0.001 --n_cores 7’,
  • I end up with what seems like an uncorrected z-map of the difference between the two input files (the ‘subtraction_z_desc-group1MinusGroup2’ file), where positive z-values indicate higher values for inputA and negative z-values indicate lower values for inputA.

How would I go about getting a list of thresholded clusters that corresponds to the output you would get with GingerALE (i.e., an inputA > inputB cluster contrast)?

Again, thanks for all the help.
Best,
Max

We’re working on a paper, but for the moment the best way to cite NiMARE is to use the Zenodo DOI.

Yes, in our original implementation, the subtraction algorithm (1) selected significant voxels from the Input 1 meta-analysis, (2) ran a subtraction analysis limited to just Input 1 > Input 2, (3) selected significant voxels from the Input 2 meta-analysis, (4) ran a subtraction analysis limited to Input 2 > Input 1, and (5) combined the two difference maps.

However, in neurostuff/NiMARE#242, we decided to drop the voxel selection step because that step is somewhat circular (and could theoretically bias the results), and because analyzing the whole brain is much faster.

I wrote the initial NiMARE implementation based off of a MATLAB implementation kindly shared by Simon Eickhoff. It’s been a long time since I used GingerALE for a subtraction analysis. Could you clarify what you mean here?

You should be able to run the z_desc-group1MinusGroup2 map through nilearn’s get_clusters_table function.
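
For example (untested; the file name and thresholds are just illustrative):

import nibabel as nib
from nilearn.reporting import get_clusters_table

# Placeholder file name for the subtraction z-map written by the CLI
z_img = nib.load("subtraction_z_desc-group1MinusGroup2.nii.gz")

# z > 3.29 is roughly p < 0.001 (two-tailed), uncorrected; this captures only the
# positive (group1 > group2) direction
table = get_clusters_table(z_img, stat_threshold=3.29, cluster_threshold=10)
print(table)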

@MCKeuken please be aware that I received a bug report for 0.0.4 this morning pertaining to cluster-level FWE-corrected CBMA (including ALE) results. Specifically, they’re going to be empty with 0.0.4. I’m going to push a fix and make a new release that also includes the packaging fix from your own bug report.

Hi Taylor,

Thanks for the heads up and for the quick fix!

I’ve noticed some strange behaviour with get_clusters_table for the ALE cluster-corrected output: for a few clusters, the peak value reported by get_clusters_table is equal to 0. If you look at the reported coordinate in relation to the cluster-corrected output, there doesn’t seem to be a corresponding cluster at that location.

What I think happens is that the resulting clusters are binary (only a single z-value per cluster), which means that get_clusters_table returns the center of mass of each cluster (and not the peak coordinate, because there is none). Depending on the geometric shape of the actual cluster, the resulting coordinate might lie outside of the cluster (an extreme example would be a donut-shaped cluster). If you then want to link that coordinate to a given structure, you might assign an anatomical region that doesn’t actually overlap with the spatial location of the cluster.
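
To illustrate with a toy 2D example (not actual NiMARE output):

import numpy as np
from scipy import ndimage

# Build a ring-shaped ("donut") binary cluster
yy, xx = np.mgrid[0:50, 0:50]
radius = np.sqrt((yy - 25) ** 2 + (xx - 25) ** 2)
ring = (radius > 10) & (radius < 15)

# The center of mass lands in the hole of the donut, outside the cluster itself
com = ndimage.center_of_mass(ring)
print(com)                             # (25.0, 25.0)
print(ring[int(com[0]), int(com[1])])  # False: that coordinate is not inside the cluster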

A tool that might deal with this is atlasreader (https://github.com/miykael/atlasreader) by Michael Notter, which reports the overlap of the entire cluster (instead of just the peak coordinate) with a given atlas.

Yes, that is the expected behavior. Cluster-level inference means that the cluster itself is what has an associated probability, so peaks/subpeaks are not meaningful. We made that explicit in NiMARE by setting the whole cluster’s value to its cluster-level significance.

Ah, yes. This is an unfortunate possibility in cases like this. I do recall someone was working on a method to get meaningful centers of mass in weird shapes like that at a hackathon in 2019. I think it might be Chris Rorden’s DistanceFields tool. It’s written in Pascal, but someone savvy could probably translate it to Python. If someone does get a Python version working, then it’s probably worth opening a feature request issue in the nilearn repository.

I think that would definitely help with the labels, but it wouldn’t identify a good summary coordinate (i.e., center of mass or peak location) for the table, right?

I haven’t seen that tool before, although I did write something that should work similarly a while ago (see here), which I used as the basis for a number of changes I got merged into get_clusters_table. I proposed toggling voxel-level vs. cluster-level summary approaches (e.g., peak vs. mean values, peak vs. COM coordinates) in get_clusters_table in nilearn/nistats#166, but there wasn’t much interest. Perhaps it would be worth bringing up again now that there’s a solid use-case.

Ultimately, I think this is a nilearn bug (or potential feature request) that merits an issue on their repository. Would you be willing to open an issue? If you can share the nifti file and example cluster table, that would help the devs replicate the issue and hopefully come up with a solution.

Hi Taylor,
Yes, I agree, and I’ll open an issue in the nilearn repository.
Cheers
Max

Hi, Taylor

For an ALE-based subtraction analysis, NiMARE determines the p-values based on voxel-wise null distributions, then transforms the p-values to z-values, and only returns the z-map of differences between the two datasets. The z-map seems to be unthresholded. In such a situation (without a p-value map), how can we correct our subtraction analysis and obtain significant results?

Kind regards,
Ray

Hi Ray,

The ALESubtraction should return a p_desc-group1MinusGroup2 map along with the z-statistic map. However, those p- and z-values are not corrected for multiple comparisons. I assume that is your concern?

Unfortunately, there’s no solid multiple comparisons correction approach for the ALE subtraction analysis at the moment. To be fair, though, the standard ALE subtraction analysis doesn’t include multiple comparisons correction either; it just takes steps that help reduce likely false positives.

In the standard ALE subtraction analysis, they limit the analysis to voxels that are significant in the main effects analyses. I.e., for the contrast of Dataset A > Dataset B, only look at voxels that are significant in a standard ALE meta-analysis of Dataset A, and vice versa. Other than that difference, NiMARE’s approach and the standard approach should be equivalent.

If you want to replicate that approach, you can run an ALE + multiple comparisons correction on each of the individual Datasets, and then mask the subtraction results with those. Below is some (untested) code that should roughly accomplish that.

# Imports needed for this sketch
import nimare.meta.cbma.ale
from nimare.correct import FWECorrector

# First, run the subtraction (dset1 and dset2 are your two NiMARE Datasets)
sub = nimare.meta.cbma.ale.ALESubtraction(n_iters=10000)
res_sub = sub.fit(dset1, dset2)

# Next, run ALEs on the two different Datasets
ale = nimare.meta.cbma.ale.ALE(null_method="approximate")
res1 = ale.fit(dset1)
res2 = ale.fit(dset2)

# Apply multiple comparisons correction to the ALEs
corr = FWECorrector(method="montecarlo", voxel_thresh=0.001, n_iters=10000)
cres1 = corr.transform(res1)
cres2 = corr.transform(res2)

# Get the maps you need
img_sub = res_sub.get_map("z_desc-group1MinusGroup2")
img1 = cres1.get_map("z_level-cluster_corr-FWE_method-montecarlo")
img2 = cres2.get_map("z_level-cluster_corr-FWE_method-montecarlo")

# Threshold and binarize the ALE maps
from nilearn import image
img1_bin = image.math_img("img > 1.65", img=img1)
img2_bin = image.math_img("img > 1.65", img=img2)

# Limit subtraction to significant voxels from original meta-analyses
# NOTE: This is a simplification by combining the two comparisons, but I think it's fine.

# Create combined ALE mask
import numpy as np
ale_mask = image.math_img("np.maximum(img1, img2)", img1=img1_bin, img2=img2_bin)

# Now limit those voxels
img_sub = image.math_img("subtraction * mask", subtraction=img_sub, mask=ale_mask)

I hope that helps.

Best,
Taylor

Hi, Taylor

Huge thanks for your code; it works fine for finding the significant regions. I still have one small point of confusion about the code above: when defining the threshold mask, voxels with a z-value > 1.65 were selected. Is this threshold necessary? Since the results of each individual meta-analysis were already FWE corrected (the FWE-corrected maps were used in the code above), why don't we just select the voxels with a z-value > 0?

Kind regards,
Ray

Hi Ray,

The z- and p-values are corrected for multiple comparisons, but they’re not thresholded.

For the cluster-level corrected maps, there is first a cluster-defining threshold (voxel_thresh) that creates clusters. The cluster-level correction determines the p-value associated with each resulting cluster, so the output has corrected p-values for its clusters, but that includes all clusters, not just the ones that would be significant based on your chosen FWE threshold.

For the voxel-level corrected maps, the p-values are just scaled based on the correction. The p-values and z-values will be corrected without being thresholded.

Therefore, when you want to interpret your results, you need to threshold the resulting maps based on the alpha level you’ve chosen, which will often be 0.05. For z-maps, that corresponds to a threshold of about 1.65 for a one-tailed test, which is what ALE uses.
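
For example, a minimal sketch of how to get and apply that threshold (untested; the file name is a placeholder):

import nibabel as nib
from nilearn import image
from scipy import stats

# One-tailed alpha of 0.05 corresponds to z ~ 1.645
z_thresh = stats.norm.isf(0.05)

# Placeholder file name for one of the corrected z-maps
z_img = nib.load("corrected_z_map.nii.gz")

# Zero out sub-threshold voxels
z_img_thresh = image.math_img(f"img * (img > {z_thresh})", img=z_img)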

Best,
Taylor