NiMARE decoding - How to refine/restrict the Neurosynth dataset & how to interpret word weights

Hello all,

I am currently running a continuous decoding script on an unthresholded statistical map against the Neurosynth database, using the GCLDA method. I am certainly not the most experienced coder, so my question may be easy to solve for a seasoned NiMARE/Python user. How can we restrict the database to a subset of terms/topics/regions? Right now I’m using the whole Neurosynth database, which naturally yields noisy results with spurious associations, not to mention the computing power such an analysis requires. Say I wanted to restrict it to cognitive functions or brain regions: how should I modify my code? Here it is:

```python
import os
from pprint import pprint

import nimare.extract
import nimare.io
from nilearn import image, masking, plotting
from nimare import annotate

out_dir = os.path.abspath("/Users/m246120/Desktop/dAD_BPR/Neurosynth_Maps/abstracts/")
os.makedirs(out_dir, exist_ok=True)

# Download the Neurosynth database (coordinates, metadata, and term features).
files = nimare.extract.fetch_neurosynth(
    path=out_dir,
    version="7",
    overwrite=False,
    source="abstract",
    vocab="terms",
)
pprint(files)
neurosynth_db = files[0]

# Convert the downloaded files into a NiMARE Dataset and save it.
neurosynth_dset = nimare.io.convert_neurosynth_to_dataset(
    coordinates_file=neurosynth_db["coordinates"],
    metadata_file=neurosynth_db["metadata"],
    annotations_files=neurosynth_db["features"],
)
neurosynth_dset.save(os.path.join(out_dir, "neurosynth_dataset.pkl.gz"))
print(neurosynth_dset)

# Generate raw term counts (not tf-idf) from the abstracts.
counts_df = annotate.text.generate_counts(
    neurosynth_dset.texts,
    text_column="abstract",
    tfidf=False,
    max_df=0.99,
    min_df=0.01,
)
counts_df.head(5)

# Train the GCLDA topic model.
model = annotate.gclda.GCLDAModel(
    counts_df,
    neurosynth_dset.coordinates,
    mask=neurosynth_dset.masker.mask_img,
    n_topics=50,
    n_regions=10,
    symmetric=True,
)
model.fit(n_iters=10, loglikely_freq=20)
model.save("gclda_model.pkl.gz")

# Plot the spatial distributions of the first five topics.
topic_img_4d = masking.unmask(model.p_voxel_g_topic_.T, model.mask)
for i_topic in range(5):
    topic_img_3d = image.index_img(topic_img_4d, i_topic)
    plotting.plot_stat_map(
        topic_img_3d,
        draw_cross=False,
        colorbar=False,
        annotate=False,
        title=f"Topic {i_topic + 1}",
    )
```

My other question is: how should we interpret word weights? It’s pretty straightforward for correlations, since they are bounded between -1 and +1, but that is not the case for word weights.

Thank you!

In order to restrict the model’s topics, you can limit the terms being fed in (i.e., the counts_df). I’m a little wary of recommending this, because trying to control which topics can be detected probably reduces the validity of the model. Unfortunately, neither of the original developers is still in academia, so I doubt they’ll weigh in on this. At minimum, I would recommend making sure that the vocabulary (i.e., the “terms” in the counts DataFrame) is representative and unbiased.

One thing I’ve tried in the past is to restrict the vocabulary to a cognitive science-related ontology (namely, the Cognitive Atlas). NiMARE has functions for extracting Cognitive Atlas terms from text (see the Cognitive Atlas annotation example in the NiMARE documentation). You could replace the generate_counts call with those functions, as long as you ensure that they return counts, rather than tf-idf (or some other) weights. This way, you could be reasonably sure that the terms going into the topic model are related to cognitive domains, though you wouldn’t control which domains. Unfortunately, there will always be relevant terms that are not captured by any ontology, and the Cognitive Atlas still has large gaps.
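A sketch of what that might look like, based on NiMARE’s Cognitive Atlas annotation example (exact function locations and signatures may differ slightly across versions):

```python
import pandas as pd

from nimare import annotate, extract

# Download the Cognitive Atlas vocabulary: term IDs plus the asserted
# relationships between terms.
cogatlas = extract.download_cognitive_atlas(data_dir=out_dir, overwrite=False)
id_df = pd.read_csv(cogatlas["ids"])
rel_df = pd.read_csv(cogatlas["relationships"])

# Extract Cognitive Atlas term counts from the abstracts. This returns raw
# counts, so it can stand in for the generate_counts call above.
counts_df, rep_text_df = annotate.cogat.extract_cogat(
    neurosynth_dset.texts, id_df, text_column="abstract"
)
```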

One added benefit is that you could then apply hierarchical expansion to leverage the asserted relationships between terms in the ontology before fitting the topic model. Just a thought.
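A rough sketch of that expansion step, again following the NiMARE Cognitive Atlas example (the relationship weights here are just illustrative):

```python
# Propagate each term's counts up to its related terms, with a weight per
# relationship type; rel_df comes from the Cognitive Atlas download above.
weights = {"isKindOf": 1, "isPartOf": 1, "inCategory": 1}
counts_df = annotate.cogat.expand_counts(counts_df, rel_df, weights)
```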

Unfortunately, you’re still restricted by the fact that you’re working with abstracts. There’s no perfect solution, but having access to more text would probably produce better representations of the original papers. You might want to try out NeuroQuery’s dataset. While the NeuroQuery devs can’t share article text for copyright reasons, they do provide annotations (term counts) for a large vocabulary, split up by different sections of the papers. Check out nimare.extract.fetch_neuroquery and https://github.com/neuroquery/neuroquery_data if that interests you.
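For what it’s worth, fetching and converting the NeuroQuery data looks very similar to the Neurosynth code above. This is a sketch based on NiMARE’s download examples; depending on your NiMARE version, the directory argument may be path or data_dir, and you should check which vocab/type options are available:

```python
import nimare.extract
import nimare.io

# Fetch NeuroQuery annotations and coordinates. The vocab/type values here
# follow the NiMARE example; note that GCLDA expects raw counts, so check
# the neuroquery_data repository for which annotation types are provided.
files = nimare.extract.fetch_neuroquery(
    path=out_dir,
    version="1",
    overwrite=False,
    source="combined",
    vocab="neuroquery6308",
    type="tfidf",
)
neuroquery_db = files[0]

# The Neurosynth converter also handles NeuroQuery's file format.
neuroquery_dset = nimare.io.convert_neurosynth_to_dataset(
    coordinates_file=neuroquery_db["coordinates"],
    metadata_file=neuroquery_db["metadata"],
    annotations_files=neuroquery_db["features"],
)
```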

As for running in just a single brain region… To be honest, I’ve never tried running the model on anything other than a whole-brain mask, so I’m not sure how it would perform, but there is a mask parameter that you could try out. Just to be safe, at least make sure the mask is in MNI space.
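If you do try it, it would just be a matter of swapping the mask argument, something like this (completely untested, and roi_mask.nii.gz is a hypothetical file):

```python
# Untested sketch: fit the model within a single region of interest by
# passing an ROI mask (in MNI space) instead of the whole-brain mask.
# "roi_mask.nii.gz" is a hypothetical file. Symmetric subregions probably
# don't make sense within a single ROI, hence symmetric=False.
roi_model = annotate.gclda.GCLDAModel(
    counts_df,
    neurosynth_dset.coordinates,
    mask="roi_mask.nii.gz",
    n_topics=50,
    n_regions=2,
    symmetric=False,
)
```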

The word weights are just the dot products between the topic-voxel weight arrays, the topic-word weight arrays, and the input (ROI or whole-brain map) array. As such, they’re arbitrarily scaled, and there’s really no way to assign statistical significance to them. They are comparable across terms within a given decoded map/ROI, though, so I would recommend reporting the top X number of terms for each decoded map, where you decide what X should be beforehand.
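In practice, that selection might look like this with NiMARE’s gclda_decode_map (the map filename is a placeholder, and I believe the weight column is named “Weight” in recent versions):

```python
from nimare.decode.continuous import gclda_decode_map

# Decode an unthresholded map with the trained GCLDA model.
# "my_unthresholded_map.nii.gz" is a placeholder for your own image.
decoded_df, topic_weights = gclda_decode_map(model, "my_unthresholded_map.nii.gz")

# Report the top X terms, with X chosen before looking at the results.
X = 10
print(decoded_df.sort_values(by="Weight", ascending=False).head(X))
```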

I doubt I’ll be able to describe it as well as the original paper, so to quote Rubin et al. (2017):

Lastly, while our decoding framework is based on probabilistic GC-LDA topics, the outputs it generates cannot typically be interpreted as probabilities, because the input images researchers conventionally seek to decode are mostly real-valued t or z maps whose meaning can vary dramatically. While this restriction limits the utility of our framework, it is, at present, unavoidable. Providing meaningful absolute estimates of the likelihood of different cognitive processes given observed brain activity would require either (a) that researchers converge on a common standard for representing observed results within a probabilistic framework (e.g., reporting the probability of subjects displaying supra-threshold activation in every voxel), or (b) re-training the GC-LDA model and associated decoding framework on a very large corpus of whole-brain images comparable to those that researchers seek to decode, rather than on a coordinate-based meta-analytic database.

One minor note about formatting code on NeuroStars: you can surround pasted code with three backticks (```) before and after. If you include python after the first set of backticks, the forum will apply Python syntax highlighting, which makes the code much easier to read.

10 iterations will definitely not be sufficient for a good model fit; in the original package, Rubin recommends at least 1000. Also, running with 10 regions is a little unusual. The papers introducing GCLDA only used 2 symmetric regions; there’s no reason you can’t run with 10, but as far as I know, no one has published anything with that many regions, so I don’t know how the results will look. Hopefully, you end up with meaningful, distributed networks.
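For the fitting step itself, that would just mean something like this (loglikely_freq only controls how often the log-likelihood is recorded, so any reasonable value is fine):

```python
# At least 1000 iterations, per Rubin's recommendation in the original package.
model.fit(n_iters=1000, loglikely_freq=100)
```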

I hope that helps.

Best,
Taylor

Taylor,

Thank you so much for this quick and exhaustive reply. This is very helpful!

Nick

Hi Taylor,
I’m still confused about how to restrict the vocabulary for decoding. I found many redundant terms in ‘data-neurosynth_version-7_vocab-terms_vocabulary.txt’, like ‘active’, ‘actively’, and ‘activities’. I am also not interested in location terms like ‘occipital cortex’ and ‘superior parietal’. Is there a function to remove these terms from decoding? Unfortunately, the link you mentioned above is broken. If you have any suggestions, please let me know. Thank you very much.

Best,
Yunge

Hi Yunge,

NiMARE doesn’t have any functions for reducing the Neurosynth terms, but if you work from the article abstracts (which should be included in the Dataset if you used this example), you can use a topic model, or term extraction with a reduced vocabulary like the Cognitive Atlas.
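That said, the counts DataFrame is just a pandas DataFrame with one column per term, so you can always drop unwanted terms by hand before fitting or decoding. A minimal sketch (the term list is illustrative):

```python
# Drop redundant variants and location terms before fitting the model.
# errors="ignore" skips any listed term that isn't actually in the vocabulary.
terms_to_drop = ["actively", "activities", "occipital cortex", "superior parietal"]
counts_df = counts_df.drop(columns=terms_to_drop, errors="ignore")
```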

The current version of the Cognitive Atlas example is in the NiMARE documentation gallery, which also includes examples for LDA and GCLDA topic models.

Cheers,
Taylor