I was looking at using nimare.decode.discrete.NeurosynthDecoder to decode an ROI on the cortical surface. To that end, I had projected the surface ROI to the cortical ribbon in the volume. My intention was to supply this volume to the Neurosynth decoder. However, as this ROI can only occur in gray matter, the resulting terms should be driven, in part, by gray/white matter differences, right?
Is my assessment correct and, if yes, is there any way to limit the decoder to only consider a subset of voxels (e.g. only voxels in the cortical ribbon)?
I wouldn’t say that the results would be driven by gray/white matter differences. At least not in the same way that they would be if you decoded a map where gray matter values are high and white matter values are low (or vice versa). Rather, since NiMARE’s decoders should be able to accept masks, white matter voxels would just not be considered at all in the decoding process.
Does that make sense/answer your question?
Yes, the Decoders should accept a mask. There’s no direct support for surface data, but applying a volumetric cortical mask should work just fine.
Thanks for the quick response - I had expected the mask parameter to be in the decoder class and, as I couldn’t find it there, assumed it didn’t exist. But I just found it. Just to be sure I’m doing this right, I should supply the ribbon mask with the Dataset, right?
I have to apologize; I think I misunderstood what you were doing. I totally misread your initial post as using the continuous decoder. Please disregard my initial response, for the most part.
In your use-case, you have an ROI within the gray matter. When you use the Neurosynth discrete decoder, you will first select all studies with at least one focus in the ROI using Dataset.get_studies_by_mask(). If you want to be sure that you are only comparing studies with at least one focus in that ROI against ones with at least one focus in gray matter except for that ROI, you can do something like this:
You don’t want to set your Dataset’s mask to the cortical ribbon (unless you also plan to do a voxel-wise analysis like a meta-analysis), since the Decoder doesn’t select the comparison set of studies based on its mask.
I think I was a little confused too; given that (as far as I found) the examples on ReadTheDocs only show the discrete decoder, I wasn’t sure whether the continuous decoders were supported yet. Are they?
If they are, I may actually be better off reframing my use-case to the continuous decoder. Am I correct that for a ‘Neurosynth-style’ decoding I should be using CorrelationDistributionDecoder? If yes, I don’t see a masking parameter for this class - where would I do this?
They’re supported, but the continuous decoders take longer and are too computationally intensive to run as an RTD example.
The CorrelationDistributionDecoder only works if you have an image for each study in the Dataset (i.e., if you have an image-based Dataset instead of a coordinate-based one). Neurosynth’s online decoder matches up with the CorrelationDecoder.
I should also note that, while the CorrelationDistributionDecoder works, it isn’t an established approach. Most of the time, you don’t have study-wise images for a big enough sample to support decoding, and at that point it might be better to try something more… principled with a tool like nilearn.
If you use the CorrelationDecoder, then it runs a meta-analysis for each feature in the Dataset’s annotations, depending on what you feed in for feature_group and features. The meta-analyses would be restricted based on the Dataset's mask, so in that case you would want to set the mask of the whole Dataset to the cortical ribbon.
The Decoder accepts an initialized Meta object, so you could also create the meta-analysis estimator ahead of time with a mask:
from nimare.decode.continuous import CorrelationDecoder
from nimare.meta.cbma.mkda import MKDAChi2

meta = MKDAChi2(mask="gray_matter_ribbon_mask.nii.gz")
decoder = CorrelationDecoder(meta_estimator=meta)