Hi,
In GingerALE, I would use the following settings to analyse a single dataset: a cluster-level FWE value of 0.05, 10,000 permutations, and a p-value of 0.001. In other words, a cluster-based FWE correction with 10,000 permutations, a cluster-forming threshold of p < 0.001, and a cluster-level correction of p < 0.05.
However, running this analysis in GingerALE is a bit of a nightmare because it takes forever, so I would like to use NiMARE for a similar analysis instead. I would really appreciate a short sanity check to make sure I'm doing what I think I'm doing.
My current pipeline consists of the following steps (a minimal code sketch follows the list):
- Use convert_sleuth_to_json() to convert my Sleuth-formatted text file to JSON
- Fit the JSON dataset with nimare.meta.cbma.ALE()
- Apply family-wise error rate correction with nimare.correct.FWECorrector(method='montecarlo', voxel_thresh=0.001, n_iters=10000, n_cores=7)
- Save all the resulting maps with cres.save_maps()
- Extract the cluster table using nilearn.reporting.get_clusters_table(the resulting z map, 3.291)
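In code, the pipeline looks roughly like this (file names are placeholders, and the name of the corrected map is my best guess for my NiMARE version; I list cres.maps.keys() to confirm the exact names):

```python
from nimare.dataset import Dataset
from nimare.io import convert_sleuth_to_json
from nimare.meta.cbma import ALE
from nimare.correct import FWECorrector
from nilearn.reporting import get_clusters_table

# 1. Convert the Sleuth text file to JSON and load it as a Dataset
#    ("sleuth_file.txt" / "dataset.json" are placeholders for my files)
convert_sleuth_to_json("sleuth_file.txt", "dataset.json")
dset = Dataset("dataset.json")

# 2. Fit the ALE estimator
ale = ALE()
results = ale.fit(dset)

# 3. Monte Carlo FWE correction: cluster-forming threshold p < 0.001,
#    10,000 iterations
corrector = FWECorrector(method="montecarlo", voxel_thresh=0.001,
                         n_iters=10000, n_cores=7)
cres = corrector.transform(results)

# 4. Save all resulting maps
cres.save_maps(output_dir=".", prefix="ale")

# 5. Cluster table from the cluster-level FWE-corrected z map
#    (map name may differ across NiMARE versions; check cres.maps.keys())
z_img = cres.get_map("z_level-cluster_corr-FWE_method-montecarlo")
print(get_clusters_table(z_img, stat_threshold=3.291))
```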
If I understand the boilerplate text in NiMARE correctly, I would basically end up with the following description of my analysis (besides providing a link to the actual notebook):
A cluster-forming threshold of p < 0.001 was used to perform cluster-level FWE correction. 10,000 iterations were performed to estimate a null distribution of cluster sizes, in which the locations of coordinates were randomly drawn from a gray matter template and the maximum cluster size was recorded after applying an uncorrected cluster-forming threshold of p < 0.001. The negative log-transformed p-value for each cluster in the thresholded map was determined based on the cluster sizes.
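Just to make sure I understand that description, here is a toy sketch of the permutation logic as I read it (purely illustrative, not NiMARE's actual implementation: random Gaussian volumes stand in for the null ALE maps that NiMARE builds from randomly drawn coordinates, and the cluster-forming z and observed cluster size are made-up numbers):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(42)
shape = (20, 20, 20)
cluster_forming_z = 3.09  # roughly p < 0.001, one-tailed
n_iters = 1000

# Null distribution of maximum cluster sizes
max_null_sizes = np.empty(n_iters)
for i in range(n_iters):
    null_map = rng.standard_normal(shape)  # stand-in for a null ALE z map
    labels, _ = ndimage.label(null_map > cluster_forming_z)
    sizes = np.bincount(labels.ravel())[1:]  # drop background label 0
    max_null_sizes[i] = sizes.max() if sizes.size else 0

# Cluster-level FWE p-value for an observed cluster of a given size
observed_cluster_size = 12  # made-up example value
p_fwe = (max_null_sizes >= observed_cluster_size).mean()
print(f"cluster-level FWE p = {p_fwe:.4f}")
```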
Are there any steps I'm missing in my NiMARE pipeline, or anything I should change, so that I end up with an analysis equivalent to the GingerALE analysis I described at the beginning?
Thanks a lot for any thoughts,
Max