An error in Neurosynth topic-based decoding via NiMARE

Hi there,
I'm trying to use NiMARE's image decoding feature to obtain correlations between my image of interest and the terms in the Neurosynth database, but I'm running into an error. I've searched around and it doesn't look like anyone else has reported this situation. Here is my code:

import os
from pprint import pprint


import nimare
from nimare import dataset
from nimare.decode import continuous
from nimare.extract import fetch_neurosynth



out_dir = os.path.abspath("/home/Wsh/FSK/NMH_2025/code/fcs/common/surrogate_maps/data/raw_congnitive")
os.makedirs(out_dir, exist_ok=True)

files = fetch_neurosynth(
    data_dir=out_dir,
    version="7",
    overwrite=False,
    source="abstract",
    vocab="LDA50",
)
neurosynth_db = files[0]
pprint(neurosynth_db)
# Note the "keys" file. That has the top 30 words for each topic.
# It *doesn't* go in the Dataset at all though.
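# Optional, for illustration only: the keys file is a plain TSV, so it can be
# inspected with pandas to see the top words for each LDA50 topic, e.g.:
#   import pandas as pd
#   topic_keys = pd.read_csv(neurosynth_db["features"][0]["keys"], sep="\t")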

# Get the Dataset object
neurosynth_dset = nimare.io.convert_neurosynth_to_dataset(
    coordinates_file=neurosynth_db["coordinates"],
    metadata_file=neurosynth_db["metadata"],
    annotations_files=neurosynth_db["features"],
)

neurosynth_dset.save(os.path.join(out_dir, "neurosynth_dataset.pkl.gz"))
print(neurosynth_dset)

neurosynth_dset = dataset.Dataset.load("/home/Wsh/FSK/NMH_2025/code/fcs/common/surrogate_maps/data/raw_congnitive/neurosynth_dataset.pkl.gz")

# Initialize the decoder, fit it to the Neurosynth Dataset, and decode the input image
decoder = continuous.CorrelationDecoder(feature_group=None, features=None)
decoder.fit(neurosynth_dset)

# transform() returns a pandas DataFrame of correlations (one row per feature)
decoded_df = decoder.transform("/home/Wsh/FSK/NMH_2025/res/bsnip/show_gradients/FCS_gradients_1.nii")
decoded_df.to_csv("/home/Wsh/FSK/NMH_2025/code/fcs/common/surrogate_maps/data/raw_congnitive/neurosynth_dataset_slice_decoded.csv")

INFO:nimare.extract.utils:Dataset found in /home/Wsh/FSK/NMH_2025/code/fcs/common/surrogate_maps/data/raw_congnitive/neurosynth

INFO:nimare.extract.extract:Searching for any feature files matching the following criteria: [('source-abstract', 'vocab-LDA50', 'data-neurosynth', 'version-7')]
Downloading data-neurosynth_version-7_coordinates.tsv.gz
File exists and overwrite is False. Skipping.
Downloading data-neurosynth_version-7_metadata.tsv.gz
File exists and overwrite is False. Skipping.
Downloading data-neurosynth_version-7_vocab-LDA50_keys.tsv
File exists and overwrite is False. Skipping.
Downloading data-neurosynth_version-7_vocab-LDA50_metadata.json
File exists and overwrite is False. Skipping.
Downloading data-neurosynth_version-7_vocab-LDA50_source-abstract_type-weight_features.npz
File exists and overwrite is False. Skipping.
Downloading data-neurosynth_version-7_vocab-LDA50_vocabulary.txt
File exists and overwrite is False. Skipping.
{'coordinates': '/home/Wsh/FSK/NMH_2025/code/fcs/common/surrogate_maps/data/raw_congnitive/neurosynth/data-neurosynth_version-7_coordinates.tsv.gz',
 'features': [{'features': '/home/Wsh/FSK/NMH_2025/code/fcs/common/surrogate_maps/data/raw_congnitive/neurosynth/data-neurosynth_version-7_vocab-LDA50_source-abstract_type-weight_features.npz',
               'keys': '/home/Wsh/FSK/NMH_2025/code/fcs/common/surrogate_maps/data/raw_congnitive/neurosynth/data-neurosynth_version-7_vocab-LDA50_keys.tsv',
               'metadata': '/home/Wsh/FSK/NMH_2025/code/fcs/common/surrogate_maps/data/raw_congnitive/neurosynth/data-neurosynth_version-7_vocab-LDA50_metadata.json',
               'vocabulary': '/home/Wsh/FSK/NMH_2025/code/fcs/common/surrogate_maps/data/raw_congnitive/neurosynth/data-neurosynth_version-7_vocab-LDA50_vocabulary.txt'}],
 'metadata': '/home/Wsh/FSK/NMH_2025/code/fcs/common/surrogate_maps/data/raw_congnitive/neurosynth/data-neurosynth_version-7_metadata.tsv.gz'}
WARNING:nimare.utils:Not applying transforms to coordinates in unrecognized space 'UNKNOWN'
Dataset(14371 experiments, space='mni152_2mm')
 10%|████▍                                       | 5/50 [03:19<29:55, 39.90s/it]
Traceback (most recent call last):
  File "/home/Wsh/FSK/NMH_2025/code/fcs/common/surrogate_maps/Cognitive_analysis_code/code_for_cognitive_analy.py", line 46, in <module>
    decoder.fit(neurosynth_dset)
  File "/home/Wsh/ZYT/miniconda3/envs/test_env/lib/python3.9/site-packages/nimare/decode/base.py", line 108, in fit
    self._fit(dataset)
  File "/home/Wsh/ZYT/miniconda3/envs/test_env/lib/python3.9/site-packages/nimare/decode/continuous.py", line 197, in _fit
    maps = {
  File "/home/Wsh/ZYT/miniconda3/envs/test_env/lib/python3.9/site-packages/nimare/decode/continuous.py", line 197, in <dictcomp>
    maps = {
  File "/home/Wsh/ZYT/miniconda3/envs/test_env/lib/python3.9/site-packages/tqdm/std.py", line 1181, in __iter__
    for obj in iterable:
  File "/home/Wsh/ZYT/miniconda3/envs/test_env/lib/python3.9/site-packages/joblib/parallel.py", line 1847, in _get_sequential_output
    res = func(*args, **kwargs)
  File "/home/Wsh/ZYT/miniconda3/envs/test_env/lib/python3.9/site-packages/nimare/decode/continuous.py", line 225, in _run_fit
    meta_results = self.meta_estimator.fit(feature_dset, nonfeature_dset)
  File "/home/Wsh/ZYT/miniconda3/envs/test_env/lib/python3.9/site-packages/nimare/meta/cbma/base.py", line 930, in fit
    self._collect_inputs(dataset2, drop_invalid=drop_invalid)
  File "/home/Wsh/ZYT/miniconda3/envs/test_env/lib/python3.9/site-packages/nimare/estimator.py", line 59, in _collect_inputs
    data = dataset.get(self._required_inputs, drop_invalid=drop_invalid)
  File "/home/Wsh/ZYT/miniconda3/envs/test_env/lib/python3.9/site-packages/nimare/dataset.py", line 435, in get
    results[k] = pd.concat(results[k])
  File "/home/Wsh/ZYT/miniconda3/envs/test_env/lib/python3.9/site-packages/pandas/core/reshape/concat.py", line 382, in concat
    op = _Concatenator(
  File "/home/Wsh/ZYT/miniconda3/envs/test_env/lib/python3.9/site-packages/pandas/core/reshape/concat.py", line 445, in __init__
    objs, keys = self._clean_keys_and_objs(objs, keys)
  File "/home/Wsh/ZYT/miniconda3/envs/test_env/lib/python3.9/site-packages/pandas/core/reshape/concat.py", line 507, in _clean_keys_and_objs
    raise ValueError("No objects to concatenate")
ValueError: No objects to concatenate

Process finished with exit code 1

Hi @Songke_Fang,

My apologies; we do not yet have sufficient documentation in NiMARE on the appropriate settings for the different feature combinations used when training these models. I will work on adding that soon.

For the LDA-based features, I recommend a threshold of 0.05 for selecting studies (frequency_threshold=0.05). The default of 0.001 is only appropriate for TF-IDF features; LDA topic weights are spread across all topics, so at 0.001 virtually every study can pass the threshold for a given topic, which can leave one of the two study groups empty and produce the "No objects to concatenate" error you saw.

decoder = continuous.CorrelationDecoder(feature_group=None, features=None, frequency_threshold=0.05)
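For reference, a minimal sketch of the full decoding step with that threshold (paths here are placeholders; transform() returns a pandas DataFrame of correlations, which you can then write out with to_csv):

decoder = continuous.CorrelationDecoder(feature_group=None, features=None, frequency_threshold=0.05)
decoder.fit(neurosynth_dset)

# One row per LDA50 topic, with the correlation between your image and that topic's meta-analytic map
decoded_df = decoder.transform("your_image.nii")
decoded_df.to_csv("decoded_results.csv")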

Best,
Julio A.

Many thanks for your reply. The code is running now and I hope it completes properly. I have another question: my data are in MNI152 space with 3 mm voxels, while the maps I extracted from Neurosynth appear to be 2 mm. Could this difference in resolution affect my decoding results, or do I need additional processing (e.g., resampling)? If so, what approach would you recommend?

That’s fine as long as it’s in MNI152. The NiftiMasker’s transform() should handle resampling to 2mm (the resolution of the mask in a NiMARE Dataset object) internally.
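If you ever want to do the resampling explicitly yourself (e.g., as a sanity check), a rough sketch with nilearn would look like the following; this assumes the Dataset's masker exposes its mask image as masker.mask_img, so adjust to your setup:

from nilearn import image

# Resample a 3 mm map onto the 2 mm grid of the Dataset's mask.
# "continuous" interpolation is appropriate for statistical/gradient maps.
target_mask = neurosynth_dset.masker.mask_img
img_2mm = image.resample_to_img("your_3mm_map.nii", target_mask, interpolation="continuous")
img_2mm.to_filename("your_3mm_map_res-2mm.nii.gz")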

Hi Julio A.,
Thank you once again for the detailed explanation; I have now obtained the results I expected.

Best,
Songke Fang