NiMARE -- allocating resources to avoid memory crashes

Hi Neurostars,

I’m having difficulty avoiding system crashes while running multiple NiMARE functions. I don’t have a good grasp of the best way to allocate memory resources across multiple cores to avoid running into this problem. I’ve run into crashes while running MKDA Chi2 with FWE correction at this step:

corr = nimare.correct.FWECorrector(method="montecarlo", n_iters=10000)
cres = corr.transform(mkda.results) # CRASH HAPPENS HERE

And while identifying studies with coordinates in an ROI in the decoding workflow:
ids = dset.get_studies_by_mask(mask_img)

Is there a modifier flag I should be using? I’ve had problems on two different 2020 iMacs, one running Catalina (10 cores, 256GB), the other Big Sur (10 cores, 128GB). Thanks in advance!

It’s not really well-documented, but there’s a memory_limit argument you can use in many of NiMARE’s classes at initialization. It accepts a string indicating the limit, like '1gb'. I don’t remember if it works within the FWECorrector, but if it doesn’t that’s probably something for us to add.

This seems weird to me, since this method isn’t normally memory-intensive. Can you share the full traceback for this one? I assume you’re running this with the full Neurosynth database?

Hi Taylor!

I will give the memory_limit argument a try. Is the limit per core or across all cores? E.g., if I wanted to allot 16GB of memory for each of two CPU cores (32GB total), would I set flags so that:

n_cores=2, memory_limit='16gb' or n_cores=2, memory_limit='32gb'

Unfortunately, I don’t have a traceback for the decoding step. I was copying code line by line, and it got hung up on this step. I left my workstation to do something else, and when I came back the computer had restarted itself. I will try running the statement again.

Sorry for the delay. After digging further, I don’t think that the FWECorrector will use memory_limit. Unfortunately, we also haven’t figured out how to combine parallelization with memory-mapped arrays, so memory_limit and n_cores won’t work together.
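
For context on what memory-mapping buys you, here is a generic NumPy sketch (this is not NiMARE’s internal implementation, just an illustration of the technique): a memory-mapped array lives on disk and pages data in on demand, so a large array does not have to fit in RAM all at once.

```python
import os
import tempfile

import numpy as np

# Create a disk-backed array; only the pages you touch are loaded into RAM.
path = os.path.join(tempfile.mkdtemp(), "big.dat")
arr = np.memmap(path, dtype="float32", mode="w+", shape=(1000, 1000))
arr[0, :] = 1.0  # writes go to the mapped file, not to an in-memory copy
arr.flush()

# Reopen the same file read-only and confirm the data persisted on disk.
arr2 = np.memmap(path, dtype="float32", mode="r", shape=(1000, 1000))
```

The difficulty alluded to above is that when worker processes each open their own view of such a file, coordinating writes safely across them is nontrivial, which is why memory_limit and n_cores don’t currently combine.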

I have a few thoughts, though.

  1. My best recommendation is to run on a cluster rather than your laptop.
  2. I have opened a PR to better manage some of the variables in the meta-analytic estimators. It probably won’t make a huge difference, but it should help.
  3. For the get_studies_by_mask call, you could try splitting the dataset into batches before calling it, and then compiling the IDs from each slice. Something like:
mask_ids = []
batch_size = 500
for i_start in range(0, len(dset.ids), batch_size):
    # Stepping the range by batch_size handles the final partial batch
    # automatically, so no separate remainder block is needed.
    temp_dset = dset.slice(dset.ids[i_start:i_start + batch_size])
    temp_ids = temp_dset.get_studies_by_mask(mask_img)
    mask_ids += list(temp_ids)

You should definitely check that code before you use it, but hopefully it’s fairly close to usable.
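
The batch-and-merge pattern above boils down to the following plain-Python sketch (batched_filter and keep are hypothetical stand-ins for the Dataset slicing and get_studies_by_mask filtering, not NiMARE functions). The point is that filtering batch by batch and concatenating gives the same result as filtering the whole collection at once:

```python
def batched_filter(ids, keep, batch_size=500):
    """Filter ids in batches, mimicking slice-then-query on a large dataset."""
    matched = []
    for start in range(0, len(ids), batch_size):
        batch = ids[start:start + batch_size]  # remainder handled automatically
        matched += [i for i in batch if keep(i)]
    return matched


ids = list(range(1203))
keep = lambda i: i % 7 == 0

# Batched filtering matches filtering everything in one pass.
assert batched_filter(ids, keep) == [i for i in ids if keep(i)]
```

Because range steps by batch_size, the last (partial) batch falls out of the loop naturally, including the edge case where the total is an exact multiple of the batch size.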

Regarding the decoding step: would you mind sharing the number of studies in dset, the number of coordinates in dset (dset.coordinates.shape[0]), and the number of voxels in the mask?

I will also look into ways to use memory_limit in the FWE corrector, but it will probably require setting n_cores to 1, at least for now.