I would like to create surrogate maps for my statistical image. The image is volumetric, parcellated, and contains both cortical and subcortical structures. Can anyone recommend toolboxes that can handle these specific properties?
Hi @JohannesWiesner,
I am not sure what you mean by surrogate maps, but spin tests have recently been proposed as null models for brain maps: https://www.sciencedirect.com/science/article/pii/S1053811918304968. See also the neuromaps toolbox for comparing brain maps: https://github.com/netneurolab/neuromaps
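For a purely cortical map in surface space, the basic workflow looks roughly like this (a minimal sketch along the lines of the neuromaps documentation; my_map and other_map are placeholders for your own annotations in fsaverage 10k space):

from neuromaps import stats
from neuromaps.nulls import alexander_bloch
# generate spatially-constrained permutations ("spins") of my_map
rotated = alexander_bloch(my_map, atlas='fsaverage', density='10k',
                          n_perm=1000, seed=1234)
# correlate the two maps and derive a p-value from the spun nulls
corr, pval = stats.compare_images(my_map, other_map, nulls=rotated)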
Best,
Steven
@Steven: Thanks for the hint! I was able to use neuromaps.nulls.burt2020
to generate null models based on my statistical image. This is one of the functions available for volumetric (and parcellated) data (I am not sure whether it can handle subcortical data, but I don’t see why that should be a problem, since we are in volumetric space anyway). According to the docs, this function returns a:
Generated null distribution, where each column represents a unique null map
The function does return a matrix (and it takes very long, as noted in the docs), but my intuitive expectation was that each column would contain the same values as my input array, just in a different order. That would be my naive idea of what a spatial permutation means: we “shuffle” the data but keep the spatial autocorrelation?
Maybe @rmarkello has an answer?
Here’s code to reproduce the analysis:
from neuromaps.nulls import burt2020
import numpy as np
from nilearn.image import load_img
from nilearn.plotting import plot_stat_map, plot_roi
# load the values from our statistical image. This is a vector with t-values
# for our 376 regions (360 cortical, 16 subcortical). Note that this image is
# also available as a NIfTI file (t_stats_img.nii), but burt2020 expects an
# array as soon as you provide an atlas image
t_stats = np.load('t_stats.npy')
# load the corresponding atlas image. This image has 376 unique values (0
# represents background). Corresponding contralateral parcels (e.g. left V1
# and right V1) have different values
atlas_img = load_img('glasser_tian.nii.gz')
# we can plot our atlas and our statistical image
t_stats_img = load_img('t_stats_img.nii')
plot_stat_map(t_stats_img, cut_coords=(0, 0, 0), draw_cross=False)
plot_roi(atlas_img, cut_coords=(0, 0, 0), draw_cross=False)
# and we can check that the resolutions match
print(t_stats_img.header.get_zooms())
print(atlas_img.header.get_zooms())
# generate surrogate maps (just 2 as a test). This can take a very long time!
# I would recommend letting this run overnight
nulls = burt2020(data=t_stats, atlas='mni152', density='1mm',
                 parcellation=atlas_img, n_perm=2, n_proc=20)
# Expectation would be that each column of nulls contains the same values
# as t_stats, just in a different order?
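Here is a quick sanity check of that expectation (my own addition, not from the neuromaps docs): if the surrogates were exact permutations, sorting each null column should reproduce the sorted input. If I understand brainsmash (which burt2020 wraps) correctly, this will typically print False, because variogram-matched surrogates preserve the spatial autocorrelation structure rather than the exact values, unless resampling from the empirical distribution is enabled.

sorted_data = np.sort(t_stats)
# does every null column contain exactly the input values, just reordered?
is_perm = all(np.allclose(np.sort(nulls[:, i]), sorted_data)
              for i in range(nulls.shape[1]))
print(is_perm)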
and here’s the input data:
https://drive.google.com/drive/folders/1A-eQY8KTARSeq3DgavB7PrI1V9yXHoa5?usp=sharing
In my understanding, the fact that the spatial nulls differ from the original data values is what distinguishes them from surrogates generated by random sampling. Additionally, the reason for using this (volumetric) method to generate spatial nulls is that it can handle subcortical regions (possibly because it calculates Euclidean distances?). Otherwise, why not use the simpler method of spherical rotations in surface space?
I’m interested in your response because my recent research also requires generating spatial nulls for parcellated volumetric data (as in fetch_spatial_nulls.py). However, I used a 2 mm resolution because 1 mm is too time-consuming; with around 100 brain regions it takes a few tens of minutes on an Intel i9-13900KF CPU without parallel processing, and the results obtained at 2 mm do not differ noticeably from those at 1 mm.
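For anyone wanting to reproduce this, downsampling the parcellation to 2 mm with nilearn could look like the following (a sketch; I am assuming neuromaps accepts density='2mm' for the MNI152 atlas, and nearest-neighbour interpolation keeps the integer parcel labels intact):

import numpy as np
from nilearn.image import load_img, resample_img

atlas_img = load_img('glasser_tian.nii.gz')
# nearest-neighbour interpolation so labels are not averaged across parcels
atlas_2mm = resample_img(atlas_img, target_affine=np.eye(3) * 2,
                         interpolation='nearest')
atlas_2mm.to_filename('glasser_tian_2mm.nii.gz')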
In my understanding, the fact that the spatial nulls differ from the original data values is what distinguishes them from surrogates generated by random sampling.
I’m not sure I understand that correctly; could you clarify what you mean? I also opened a GitHub issue for this question, but maybe you already know why this is happening?
Additionally, the reason for using this (volumetric) method to generate spatial nulls is that it can handle subcortical regions (possibly because it calculates Euclidean distances?). Otherwise, why not use the simpler method of spherical rotations in surface space?
Yes, the reason I can’t use spherical rotations is that my statistical image also includes subcortical regions. My project is also generally “voxel-based” (e.g., my first-level analyses and various other code sections assume that I’m working with voxel-based images). Long story short, if the analysis of subcortical regions were not part of my research question, I would have opted for a surface-based approach, which would make it easier to create null models with spherical rotations. But it’s too late for that now, and I can’t simply switch to a surface-based approach without rewriting and reimplementing a lot of code. Another option would be to convert my voxel-based statistical image to a surface-based one, create the null models, and convert them back to voxel-based images. But that seems awkward, and it is in any case impossible because of the subcortical regions.
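As far as I understand the implementation, burt2020 on parcellated volumetric data ultimately comes down to brainsmash-style variogram matching on a Euclidean distance matrix, which is why the subcortex is not an obstacle. A rough sketch of that core step (centroids is a hypothetical (n_parcels, 3) array of parcel coordinates; the real implementation derives its distances differently):

import numpy as np
from scipy.spatial.distance import pdist, squareform
from brainsmash.mapgen.base import Base

# Euclidean distances are defined everywhere in the volume,
# so cortical and subcortical parcels are treated alike
dist = squareform(pdist(centroids))
gen = Base(x=t_stats, D=dist)
surrogates = gen(n=1000)  # (1000, n_parcels) variogram-matched maps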
However, I used a 2 mm resolution because 1 mm is too time-consuming; with around 100 brain regions it takes a few tens of minutes on an Intel i9-13900KF CPU without parallel processing, and the results obtained at 2 mm do not differ noticeably from those at 1 mm.
The code above took about 15 hours on my machine (Intel Xeon Gold 6248R). It would probably be helpful if the developers could give an indication of how long it is expected to take; I initially thought I had a bug somewhere, which is why I opened this issue.
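In the meantime, a cheap workaround to gauge the runtime before committing to a full run is to time a single permutation and extrapolate (keeping in mind that one-off costs such as building the distance matrix will not scale with n_perm):

import time
t0 = time.perf_counter()
_ = burt2020(data=t_stats, atlas='mni152', density='1mm',
             parcellation=atlas_img, n_perm=1, n_proc=20)
print(f'n_perm=1 took {time.perf_counter() - t0:.0f} s')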
Justine Hansen answered the question on GitHub.
Hi @JohannesWiesner ,
I have read the issues you posted on GitHub (two in neuromaps and one in brainsmash). I’m not very familiar with some of the basic statistical methods for neuroimaging, and your comments have taught me a lot. I would therefore like to ask a few additional questions:
- Since my work is related to imaging transcriptomics, I only need to generate null maps for a single parcellated imaging dataset (similar to your cortical + subcortical map). I’m glad that my comment aligns with Dr. Hansen’s answer.
- I’m very interested in your study design, though I’m not sure my understanding is accurate: first, you binarize an ROI/atlas/parcellation to specify certain regions; then you correlate subjects’ imaging data within this brain area with their behavioural variables. The goal is to demonstrate that the correlation is specific to the regions you chose rather than a random effect. Based on this, you initially attempted to generate spatial nulls for the binarized image, and Dr. Josh recommended creating the null maps before binarization (Create surrogate maps for binary images?), which led to the current discussion. However, I’m confused as to why you generated spatial nulls for a t-statistic map in your code: if the t-values are smaller in the null maps, does that indicate the effect is unlikely to be due to chance? To achieve your research goal, my initial thought was to generate null maps for each subject’s parcellated image and use them in statistical tests to show that the results are not random effects (sketched below this list). Could you kindly let me know how you ultimately implemented your statistical hypothesis?
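To make that thought concrete, here is roughly what I have in mind: recompute the statistic of interest on every surrogate column and count how often it is at least as extreme as the observed one (my_statistic is a hypothetical placeholder, e.g. a brain-behaviour correlation):

import numpy as np

obs = my_statistic(t_stats)  # statistic on the real map
null_stats = np.array([my_statistic(nulls[:, i])
                       for i in range(nulls.shape[1])])
# two-sided empirical p-value with the usual +1 correction
pval = (1 + np.sum(np.abs(null_stats) >= np.abs(obs))) / (1 + len(null_stats))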
Apologies for the additional questions, and I look forward to your response.