Hi all, I’m just starting to learn NiMARE. I want to use its decoding functions to interpret my fMRI results, and I already have several regions of interest. I read the example code, but I couldn’t find any example of how to make a .json file. If I’ve missed anything, please point me to the tutorial link. Thank you very much.
Could you clarify what you want to do a bit? Most of the time, folks use large-scale databases like Neurosynth, NeuroQuery, or BrainMap to perform functional decoding. In the case of Neurosynth and NeuroQuery, NiMARE has functions to create the JSON files and Datasets. Do you want to use a smaller, manually curated dataset to decode your ROIs instead?
There isn’t much documentation about writing NiMARE JSON files from scratch, for two reasons. First, most folks tend to use NiMARE on datasets that are originally stored in a more established format (e.g., Neurosynth or BrainMap/Sleuth files), so they just use NiMARE’s conversion functions to build Datasets from their existing files without ever working with the JSON directly. Second, the NiMARE JSON file format is not finalized. We have been working on a standard called NIMADS, but NiMARE isn’t currently up-to-date with that standard. This generally doesn’t matter, since NiMARE can read and write the JSON files without user intervention most of the time. All of that is to say that there isn’t much documentation, but I can describe the current structure and provide some examples if necessary.
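To give a rough idea of the current (pre-NIMADS) structure, it’s a nested mapping of studies to contrasts, each contrast carrying its coordinates and metadata. The sketch below is illustrative only: the key names are modeled loosely on the bundled `neurosynth_laird_studies.json`, and that file (not this sketch) should be treated as the authoritative example of the exact keys.

```python
import json

# Illustrative sketch of a minimal NiMARE-style JSON with one study and one
# contrast. Field names here are an approximation; check the bundled
# neurosynth_laird_studies.json for the exact layout your NiMARE version expects.
studies = {
    "12345678": {                     # study identifier (e.g., a PMID)
        "contrasts": {
            "1": {                    # contrast identifier within the study
                "coords": {
                    "space": "MNI",   # coordinate space of the foci
                    "x": [-38.0, 42.0],
                    "y": [22.0, 18.0],
                    "z": [4.0, -2.0],
                },
                "metadata": {"sample_sizes": [25]},
            }
        }
    }
}

# Write the dict out as a JSON file that could then be passed to NiMARE.
with open("my_dataset.json", "w") as fobj:
    json.dump(studies, fobj, indent=2)
```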
Thank you for your reply. It’s so kind of you to share your thoughts. I’d like to use the Neurosynth database for decoding, but I can’t find the correct JSON file. The example code used “neurosynth_laird_studies.json” in the resource folder, which the comments describe as “a small dataset composed only of studies in Neurosynth with Angela Laird as a coauthor”. I think there should be a dataset composed of all studies in Neurosynth, but I can’t find it in the resource folder or anywhere else. I checked the neurosynth-data repository; there are some JSON files with names like “data-neurosynth_version-7_vocab-LDA200_metadata.json”, but these seem to only describe the dataset briefly and don’t look like “neurosynth_laird_studies.json” in the resource folder. Also, I ran the example code in “02_download_neurosynth.py”; four files were downloaded, but no JSON file among them. I think the data was saved as “neuroquery_dataset_with_abstracts.pkl.gz”. Should this file be used in decoding? I want to know how I can find the JSON files for Neurosynth or any other large-scale dataset for decoding. Thank you again for your patience. Looking forward to your reply.
nimare.extract.fetch_neurosynth is what you want to use for that. That “Laird” dataset you’re referring to is just for testing/examples.
The files in the data repository are in a different format. There are two functions in NiMARE that can convert those files (after they’ve been fetched with the fetch_neurosynth function) to formats that are usable by NiMARE. nimare.io.convert_neurosynth_to_json will convert the files to a JSON file, like you’re talking about, while nimare.io.convert_neurosynth_to_dataset will convert the files to a Dataset object, which basically skips the JSON step.
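Putting the fetch and convert steps together, a minimal sketch might look like the following. Argument names are as in recent NiMARE releases and may differ slightly in yours; since the download is large and requires a network connection, the sketch skips gracefully when NiMARE isn’t installed.

```python
# Sketch: fetch the Neurosynth database and convert it to a NiMARE Dataset.
# The import is guarded so the script is a no-op when NiMARE isn't available.
try:
    from nimare.extract import fetch_neurosynth
    from nimare.io import convert_neurosynth_to_dataset
    HAVE_NIMARE = True
except ImportError:
    HAVE_NIMARE = False

if HAVE_NIMARE:
    # Download the version-7 Neurosynth files (coordinates, metadata, features).
    files = fetch_neurosynth(
        data_dir="neurosynth_data",
        version="7",
        source="abstract",
        vocab="terms",
    )
    db = files[0]  # fetch_neurosynth returns a list with one dict per version

    # Convert the downloaded files into a Dataset object, skipping the JSON step.
    dset = convert_neurosynth_to_dataset(
        coordinates_file=db["coordinates"],
        metadata_file=db["metadata"],
        annotations_files=db["features"],
    )
    dset.save("neurosynth_dataset.pkl.gz")
```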
That example should also create a file named “neurosynth_dataset_with_abstracts.pkl.gz”, which is what you probably want. That file is a saved Dataset with the Neurosynth database. You can load it with the class method Dataset.load and then use it for decoding.
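For the decoding step itself, one option is the discrete ROI decoder in nimare.decode. A hedged sketch, assuming the saved Dataset file from the download example and a hypothetical ROI mask file named `my_roi.nii.gz` (use your own mask path); as above, it only runs when NiMARE and the saved file are actually present:

```python
import os

# Guarded import so the sketch is a no-op without NiMARE installed.
try:
    from nimare.dataset import Dataset
    from nimare.decode.discrete import ROIAssociationDecoder
    HAVE_NIMARE = True
except ImportError:
    HAVE_NIMARE = False

DSET_FILE = "neurosynth_dataset_with_abstracts.pkl.gz"

if HAVE_NIMARE and os.path.isfile(DSET_FILE):
    # Load the Dataset saved by the Neurosynth download example.
    dset = Dataset.load(DSET_FILE)

    # "my_roi.nii.gz" is a placeholder for your own region-of-interest mask.
    decoder = ROIAssociationDecoder("my_roi.nii.gz")
    decoder.fit(dset)

    # transform() returns a DataFrame of term-wise association values.
    results = decoder.transform()
    print(results.head())
```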
Got it. Thank you very much.