Using NiMARE's CorrelationDecoder with Unthresholded ALE Maps

Hi all,

I’m a beginner in neuroimaging and have recently completed an ALE-based meta-analysis on self-referential processing. I now have unthresholded Z-value ALE maps, and I’m trying to decode them using Neurosynth with NiMARE’s continuous.CorrelationDecoder.

Could someone confirm whether my approach is correct, and whether these unthresholded maps can be used directly with the Neurosynth dataset? Below is the code I'm using. Any guidance would be greatly appreciated!

import nibabel as nib
import os
from nimare.extract import fetch_neurosynth
from nimare.io import convert_neurosynth_to_dataset
from nimare.decode import continuous

self_subj = nib.load('xxx')  # path to your unthresholded Z-value ALE map
out_dir = os.path.abspath('xxx')  # directory where the Neurosynth files will be stored
os.makedirs(out_dir, exist_ok=True)

files = fetch_neurosynth(
    data_dir=out_dir,
    version="7",
    overwrite=False,
    source="abstract",
    vocab="terms",
)

neurosynth_db = files[0]
neurosynth_dset = convert_neurosynth_to_dataset(
    coordinates_file=neurosynth_db["coordinates"],
    metadata_file=neurosynth_db["metadata"],
    annotations_files=neurosynth_db["features"],
)
neurosynth_dset.save(os.path.join(out_dir, "neurosynth_dataset.pkl.gz"))

from nimare.dataset import Dataset

neurosynth_dset = Dataset.load(os.path.join(out_dir, "neurosynth_dataset.pkl.gz"))
neurosynth_dset.update_path(out_dir)

decoder = continuous.CorrelationDecoder(feature_group=None, features=None)
decoder.fit(neurosynth_dset)
decoded_df1 = decoder.transform(self_subj)
decoded_df1 = decoded_df1.sort_values(by='correlation', ascending=False)
print(decoded_df1)

The code looks reasonable to me, but if your goal is to decode ALE maps, then I think it makes more sense to use ALE as the meta-analytic estimator within the CorrelationDecoder. I've never actually tried using the CorrelationDecoder with ALE, but it should work just fine.

I would also recommend using the same settings for the ALE estimator that you used in your original ALE meta-analysis.

from nimare.meta.cbma.ale import ALE

ale_estimator = ALE()
decoder = continuous.CorrelationDecoder(
    feature_group=None,
    features=None,
    meta_estimator=ale_estimator,
    target_image='z',
)
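For completeness, fitting and decoding would then work the same way as in your original script. This is just a sketch reusing your neurosynth_dset and self_subj variables from above:

decoder.fit(neurosynth_dset)
decoded_df = decoder.transform(self_subj)
decoded_df = decoded_df.sort_values(by='correlation', ascending=False)
print(decoded_df)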

@JulioAPeraza does that sound right to you?

Thank you very much for your help. Before receiving your response, I had already run the code, but encountered the following error (screenshot attached). I’m not sure what caused it.

Additionally, after modifying the code based on your suggestions, a new error appeared. Could you please help me understand the possible causes?

Thank you again!

The first warning, about the MKDAKernel, can just be ignored.

The ALE warning relates to the ALE algorithm itself, which determines the size of the Gaussian kernel for each study based on that study's sample size. However, the Neurosynth database doesn't include sample size information, so you need to specify a single sample size to apply to all of the studies in the Dataset. You can do that by changing ale_estimator = ALE() to ale_estimator = ALE(kernel__sample_size=30) (30 is probably a fine choice, since most studies have small samples).
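In full, only the estimator line changes; the rest of the decoder setup stays the same as before:

from nimare.meta.cbma.ale import ALE

# Apply a fixed sample size of 30 to every study's kernel,
# since the Neurosynth Dataset has no per-study sample sizes
ale_estimator = ALE(kernel__sample_size=30)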

Thank you very much for your response. After running the following code:

from nimare.meta.cbma.ale import ALE
from nimare.meta.kernel import ALEKernel

# Fix the kernel's sample size at 30 and pass it to the ALE estimator
kernel = ALEKernel(sample_size=30)
ale_estimator = ALE(kernel_transformer=kernel)

decoder = continuous.CorrelationDecoder(
    feature_group=None,
    features=None,
    meta_estimator=ale_estimator,
    target_image='z',
)

When I execute decoder.fit(neurosynth_dset), I keep seeing warning messages like:

WARNING:nimare.utils:Metadata field 'sample_sizes' not found

Can these warnings be safely ignored? Thanks again!

That's fine. The warning will still appear even though you set the sample size to 30, but the ALE estimator will use the sample size you specified anyway.
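If the repeated warnings clutter your output, one optional way to hide them is to raise the level of the nimare.utils logger before fitting. This is just a sketch using Python's standard logging module; it only suppresses the message and doesn't change NiMARE's behavior:

import logging

# Hide the "Metadata field 'sample_sizes' not found" warning from nimare.utils
logging.getLogger("nimare.utils").setLevel(logging.ERROR)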

Thank you very much for your clarification! I really appreciate your help!