How to run distributed processing on a multicore machine using Nilearn?

Does anyone have examples to follow? As you know, MRI processing is always slow.
I didn't see anything about this in the tutorials on the Nilearn website.

The simplest way to do this is to defer the distributed processing to another tool.
For example, this would work:

  1. Write a script that processes one dataset.
  2. Allow the script to take the dataset name as a command-line parameter (a minimal sketch of such a script follows the shell example below).
  3. Run the script in parallel over multiple datasets, e.g. from the shell:
python myscript.py data1.nii.gz &
python myscript.py data2.nii.gz &
python myscript.py data3.nii.gz &
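For reference, myscript.py could be as simple as the sketch below; the NiftiMasker step is just a placeholder for whatever per-dataset processing you actually need:

# myscript.py -- process a single dataset given on the command line
import sys

from nilearn.maskers import NiftiMasker

def main(dataset_path):
    # Placeholder processing: fit a whole-brain masker and extract time series
    masker = NiftiMasker(standardize=True)
    time_series = masker.fit_transform(dataset_path)
    print(f"{dataset_path}: extracted time series with shape {time_series.shape}")

if __name__ == "__main__":
    main(sys.argv[1])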

Hi @YukunQu ,

In addition to @dangom's suggestion, I'd note that Nilearn already includes parallelization capabilities through the joblib library. This is what powers the n_jobs parameter in many of the computationally expensive Nilearn functions, such as permuted_ols.
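As a rough sketch of what that looks like in practice (the random arrays here stand in for your real design and imaging data), setting n_jobs=-1 asks permuted_ols to spread its permutations across all available cores:

import numpy as np
from nilearn.mass_univariate import permuted_ols

rng = np.random.default_rng(0)
tested_var = rng.normal(size=(20, 1))    # one regressor across 20 subjects
target_vars = rng.normal(size=(20, 50))  # 50 imaging targets (e.g. voxels)

# n_jobs=-1 distributes the permutations over every available core
result = permuted_ols(tested_var, target_vars, n_perm=1000, n_jobs=-1)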

Joblib can also be used to parallelize larger analyses on distributed clusters, as in this example.
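For your own per-subject loops, a minimal joblib sketch might look like the following; the function and file names are placeholders, and the same code can be pointed at a cluster by switching the joblib backend (for example to dask, if you have it set up):

from joblib import Parallel, delayed
from nilearn.maskers import NiftiMasker

def extract_signals(dataset_path):
    # Independent per-dataset work: fit a masker and return the time series
    masker = NiftiMasker(standardize=True)
    return masker.fit_transform(dataset_path)

datasets = ["data1.nii.gz", "data2.nii.gz", "data3.nii.gz"]

# n_jobs=-1 uses every core on this machine; wrapping the call in
# joblib.parallel_backend("dask") would dispatch it to a Dask cluster instead
results = Parallel(n_jobs=-1)(delayed(extract_signals)(d) for d in datasets)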

So, depending on the level at which you'd like to parallelize your analyses, you likely have a few options available!

Hope that helps,

Elizabeth
