Is it possible to use nimare.decode.continuous.CorrelationDecoder for parcellated, unthresholded statistical images? My whole analysis is based on parcellated fMRI images and corresponding adjacency matrices derived from fiber tracking, so whatever statistical image I get as output will always contain parcel-wise test statistics (all voxels belonging to the same region share the same value). This seems to lie somewhere between correlation-based (continuous) decoding and discrete decoding?
I don’t think I’ve ever tested this behavior, but I believe that you can provide a nilearn masker, including a NiftiLabelsMasker with your atlas for parcellation, to the Decoder. I’d be curious to see if it works without issue.
EDIT: I think the only way to work on an atlas would be to use an image-based meta-analysis estimator, since the coordinate-based methods don’t support label maskers (i.e., atlases).
You could probably reproduce the CorrelationDecoder yourself, though: loop over the labels in the Dataset, run the same estimator for each label, and then parcellate the resulting meta-analysis outputs after fitting.
Hi Taylor, I think I roughly get it, but I’m worried I might do something wrong. Could you come up with an MRE? I prepared some code for you:
from joblib import Memory
from nimare.io import convert_neurosynth_to_dataset
from nimare.extract import fetch_neurosynth
from nilearn.datasets import fetch_atlas_schaefer_2018
from nilearn.datasets import load_sample_motor_activation_image
from nilearn.maskers import NiftiLabelsMasker
from nimare.decode.continuous import CorrelationDecoder
from nimare.meta.cbma import mkda

###############################################################################
## Get example data, not part of question
###############################################################################

# set cache object
memory = Memory(location='./', verbose=0)

# can take a while so we cache
# off topic: https://github.com/neurostuff/NiMARE/issues/826#issuecomment-1691947789
@memory.cache
def get_nimare_dataset(databases):
    ds = convert_neurosynth_to_dataset(
        coordinates_file=databases['coordinates'],
        metadata_file=databases['metadata'],
        annotations_files=databases['features'],
    )
    return ds

# get neurosynth data
databases = fetch_neurosynth(data_dir='./')[0]

# convert to NiMARE dataset (can take a while so we cache)
ns_dset = get_nimare_dataset(databases)

# get deterministic example atlas
atlas_bunch = fetch_atlas_schaefer_2018()
atlas_img = atlas_bunch['maps']

# get example unthresholded voxel-wise statistical image
stat_img = load_sample_motor_activation_image()

# parcellate image (assume your upstream analysis outputs a
# parcellated volumetric statistical image)
masker = NiftiLabelsMasker(atlas_img)
stat_img_parcellated_data = masker.fit_transform(stat_img)
stat_img_parcellated = masker.inverse_transform(stat_img_parcellated_data)

###############################################################################
## Question part
###############################################################################

# fit CorrelationDecoder to dataset
decoder = CorrelationDecoder(
    frequency_threshold=0.001,
    meta_estimator=mkda.MKDAChi2,
    target_image='z_desc-association',
    n_cores=12,
)

# off topic: this takes a long time (?)
# Can we cache? https://github.com/neurostuff/NiMARE/pull/845
# Don't see a cache argument in the docstrings
decoder.fit(ns_dset)

# Actual question
decoding_results = decoder.transform(stat_img_parcellated)  # can we do this?
You could do that, but I don’t think it’s the way to go, since you’re correlating a parcellated image with an un-parcellated image.
I was thinking something more like this (untested, no guarantee it’ll work):
from nimare.meta.cbma.mkda import MKDAChi2

# get deterministic example atlas
atlas_bunch = fetch_atlas_schaefer_2018()
atlas_img = atlas_bunch['maps']

# get example unthresholded voxel-wise statistical image
stat_img = load_sample_motor_activation_image()

# parcellate image (assume your upstream analysis outputs a
# parcellated volumetric statistical image)
masker = NiftiLabelsMasker(atlas_img)
stat_img_parcellated_data = masker.fit_transform(stat_img)

meta_estimator = MKDAChi2()
labels = dataset.get_labels()
for label in labels:
    feature_ids = dataset.get_studies_by_label(
        labels=[label],
        label_threshold=0.001,
    )

    # Create the reduced label+ and label- Datasets
    feature_dset = dataset.slice(feature_ids)
    nonfeature_ids = sorted(set(dataset.ids) - set(feature_ids))
    nonfeature_dset = dataset.slice(nonfeature_ids)

    # Fit the meta-analysis
    meta_results = meta_estimator.fit(feature_dset, nonfeature_dset)
    feature_img = meta_results.get_map(
        "z_desc-association",
        return_type="image",
    )

    # Parcellate the image
    feature_img_parcellated_data = masker.transform(feature_img)

    # Concatenate feature_img_parcellated_data across features and
    # keep track of the associated features
    ...

# Correlate stat_img_parcellated_data against each feature's parcellated data
# and log that in a DataFrame
...
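One way the two elided steps could be filled in, sketched with NumPy on made-up parcel vectors (the feature labels and numbers here are purely illustrative, and `np.corrcoef` (Pearson) stands in for whatever correlation function you prefer):

```python
import numpy as np

# Hypothetical accumulators filled inside the loop above: after
# masker.transform(feature_img), append the flattened parcel vector and
# remember which feature it belongs to.
feature_labels = ["terms_abstract_tfidf__pain", "terms_abstract_tfidf__language"]
feature_data = np.array([
    [0.1, 0.4, -0.2, 0.3],   # parcel-wise z-values for feature 1 (toy numbers)
    [-0.3, 0.2, 0.5, -0.1],  # parcel-wise z-values for feature 2
])
stat_img_parcellated_data = np.array([0.2, 0.5, -0.1, 0.4])

# Correlate the target parcel vector with each feature's parcel vector.
corrs = [
    np.corrcoef(stat_img_parcellated_data, row)[0, 1]
    for row in feature_data
]
results = dict(zip(feature_labels, corrs))
```

This feature-to-correlation mapping is essentially what CorrelationDecoder.transform returns, just packaged as a pandas DataFrame.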
Got it, thanks! Will try this out and let you know
Seems to work (?) I used:
import numpy as np
import sklearn.base
from joblib import Parallel, delayed
from scipy.stats import spearmanr
from nimare.meta.cbma.mkda import MKDAChi2

def _corr_with_feature_img(label, ds, masker, src_data):
    try:
        feature_ids = ds.get_studies_by_label(
            labels=[label],
            label_threshold=0.001,
        )

        # Create the reduced label+ and label- Datasets
        feature_dset = ds.slice(feature_ids)
        nonfeature_ids = sorted(set(ds.ids) - set(feature_ids))
        nonfeature_dset = ds.slice(nonfeature_ids)

        # Fit the meta-analysis
        meta_estimator = MKDAChi2()
        meta_results = meta_estimator.fit(feature_dset, nonfeature_dset)
        feature_img = meta_results.get_map(
            "z_desc-association",
            return_type="image",
        )

        # Parcellate the image
        masker_copy = sklearn.base.clone(masker)
        feature_img_parcellated_data = masker_copy.fit_transform(feature_img).reshape(-1)

        # Correlate src_data against this feature's parcellated data
        return label, spearmanr(src_data, feature_img_parcellated_data)[0]
    except Exception:
        return label, np.nan

@memory.cache
def correlate_with_feature_imgs(labels, ds, masker, src_data):
    corrs = Parallel(n_jobs=12)(
        delayed(_corr_with_feature_img)(label, ds, masker, src_data)
        for label in labels
    )
    return corrs
Explanation: Run the correlations in parallel (and cache the result, as this takes a long time). For each label, create feature_img, parcellate it using a clone of the original masker, and correlate the resulting data array with the data from the image to be decoded (src_data). I used Spearman correlation instead of Pearson.
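For intuition on the Spearman-vs-Pearson choice: Spearman's rho is just Pearson correlation applied to ranks, so it is insensitive to monotone (even nonlinear) rescalings of the parcel values. A toy illustration in plain Python (made-up vectors; no tie handling, unlike scipy.stats.spearmanr):

```python
from statistics import mean

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def ranks(x):
    """Ranks of the values in x (assumes no ties)."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0] * len(x)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the ranks."""
    return pearson(ranks(x), ranks(y))

# A monotone but nonlinear relationship between two "parcel vectors":
x = [0.1, 0.5, 1.0, 2.0, 3.0]
y = [v ** 3 for v in x]  # same ordering as x, different shape

print(pearson(x, y))   # < 1: Pearson penalizes the nonlinearity
print(spearman(x, y))  # 1.0: the rank orderings are identical
```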
Warnings:
Although a separate instance of MKDAChi2() gets created (at least I think?), I get this warning (n times):
WARNING:nimare.utils:Argument <class 'nimare.meta.kernel.MKDAKernel'> has already been initialized, so arguments will be ignored: memory, memory_level
Notes:
Correlations range from -0.6 to +0.6…Isn’t this unusually high?
That looks good to me.
I think you can just use transform instead of having to fit again, since the mask and space remain the same across meta-analyses.
That’s odd, but I don’t think it’s a problem.
I don’t use Spearman’s rho very often, but since you have only a couple hundred parcels, it seems reasonable to have pretty high correlations.
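On the fit-once point: what NiftiLabelsMasker.transform does to a single image is (ignoring resampling, masking, and aggregation-strategy options) a label-wise mean over voxels, which is fully determined by the atlas, so nothing in it depends on the image being transformed. A simplified pure-NumPy stand-in (label_means is a hypothetical helper, not nilearn's actual implementation):

```python
import numpy as np

def label_means(data, labels):
    """Mean of `data` within each nonzero label (0 = background).

    The output depends only on `labels` and `data`; the "fit" amounts to
    reading the atlas once, which is why one fitted masker can transform
    every meta-analysis map.
    """
    data = np.asarray(data, dtype=float).ravel()
    labels = np.asarray(labels).ravel()
    out = []
    for lab in np.unique(labels):
        if lab == 0:  # skip background
            continue
        out.append(data[labels == lab].mean())
    return np.array(out)

atlas = np.array([0, 1, 1, 2, 2, 2])  # toy "atlas": 2 parcels + background
img_a = np.array([9.0, 1.0, 3.0, 2.0, 4.0, 6.0])
img_b = np.array([0.0, 5.0, 5.0, 1.0, 1.0, 1.0])

print(label_means(img_a, atlas))  # [2. 4.]
print(label_means(img_b, atlas))  # [5. 1.]
```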
Would it make sense to implement this in NiMARE? That is, allowing an atlas masker to be provided to CorrelationDecoder so that it parcellates the feature images?
I don’t see any immediate reason why that shouldn’t be done, but I couldn’t say what (if any) statistical assumptions would be broken by this approach. I would say the largest cause for concern is how the modeled activation maps are generated/used, since they rely on specific spatial assumptions (MKDA uses a sphere with a fixed radius, and ALE uses a Gaussian distribution).