The error message I received was "'MKDAChi2' object has no attribute 'results'". To address this, I changed the code from mkda.fit(dset_sel, dset_unsel) to mkda.results = mkda.fit(dset_sel, dset_unsel). That resolved the error, but the line now takes an extended time to run, approximately six to seven hours.
I have a couple of questions:
Is my modification to the code correct, changing from mkda.fit(dset_sel, dset_unsel) to mkda.results = mkda.fit(dset_sel, dset_unsel)?
Is the runtime of six to seven hours normal for my configuration (MacBook Pro 2017, 2.3 GHz dual-core Intel Core i5)? Approximately how long does the entire process typically take?
I would be grateful for any insights or suggestions regarding this matter. Thank you in advance for your time and assistance.
It looks like you’re following an older tutorial. I would recommend matching the documentation you follow with the version of the software you’re using. The .results attribute was removed from MetaEstimator objects in version 0.0.12, so you must have a NiMARE version >=0.0.12 installed.
I wouldn’t modify the MetaEstimator object. Instead, I would recommend creating a new variable results (e.g., results = mkda.fit(dset_sel, dset_unsel)).
That sounds about right. Running MKDAChi2 Monte Carlo permutations will, unfortunately, take a long time. You can increase the number of cores you use (n_cores) to speed things up. I can't predict how long the whole process should take, but two recommendations I would make are (1) increase the number of iterations to 10,000 for a publication-quality meta-analysis and (2) run it on a high-performance computing (HPC) cluster.
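Putting the two fixes together, here is a minimal sketch of the corrected workflow. It assumes NiMARE >= 0.0.12 and that dset_sel and dset_unsel are your existing Dataset objects; the n_cores value is illustrative, so set it to match your machine or cluster.

```python
from nimare.meta.cbma.mkda import MKDAChi2
from nimare.correct import FWECorrector

# Keep the MetaResult in its own variable rather than
# attaching it to the estimator object.
mkda = MKDAChi2()
results = mkda.fit(dset_sel, dset_unsel)

# Family-wise error correction via Monte Carlo permutations.
# n_iters=10000 is the publication-quality setting suggested above;
# n_cores=4 is illustrative -- raise it on an HPC node.
corrector = FWECorrector(method="montecarlo", n_iters=10000, n_cores=4)
corrected_results = corrector.transform(results)
```

The Monte Carlo step is where the hours go; the fit itself is comparatively quick.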
@Yaya_Jiang Yes, this new version should speed things up about 3x on a single processor, but also reduces memory by around 20x, which means you can run on more cores in parallel even on a laptop. Try it!
I would also say that for first-pass results, you could use the FDRCorrector, which is almost instant.
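For that quick first pass, the FDR corrector can be swapped in for the Monte Carlo one. A sketch, assuming results is the MetaResult returned by mkda.fit (the alpha value is illustrative):

```python
from nimare.correct import FDRCorrector

# Independent FDR correction runs in seconds rather than hours,
# which makes it handy for sanity-checking results before
# committing to the full Monte Carlo run.
corrector = FDRCorrector(method="indep", alpha=0.05)
corrected_results = corrector.transform(results)
```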
For the Monte Carlo correction, there's just no getting around it: an analysis that takes 20 s per iteration works out to 20 s × 10,000 iterations ≈ 55 hours of computation on a single core, unfortunately. So I would also recommend an HPC for "final," publication-level results.
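The back-of-the-envelope arithmetic behind that estimate, with the per-iteration time and core count as illustrative numbers:

```python
seconds_per_iter = 20   # illustrative per-iteration cost
n_iters = 10_000        # publication-quality iteration count

# Single-core wall-clock time in hours.
total_hours = seconds_per_iter * n_iters / 3600
print(round(total_hours, 1))  # -> 55.6

# Parallelizing divides wall-clock time roughly linearly with cores.
n_cores = 4
print(round(total_hours / n_cores, 1))  # -> 13.9
```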