How to optimize parallelization of Nipype workflows that include OpenMP nodes?

Hi,

I would like to ask questions very similar to this Neurostars post.

When one has access to a server with a large number of CPUs (and RAM) and would like to optimize workflows that include nodes which are themselves parallelizable with OpenMP (FreeSurfer, etc.), the questions are:

  • What MultiProc plugin parameters other than n_procs should be considered for optimal execution?
  • In particular, how does one properly set the MultiProc plugin parameters related to the number of OpenMP threads for nodes in the workflow that use this feature (FreeSurfer, AFNI, etc.)? Can one skip all node-level configuration (e.g. the openmp input of the FreeSurfer ReconAll interface) and set everything via the MultiProc plugin? If not, how should they be combined?
  • Is there any recommendation on using a subject iterable within a workflow vs. always running the workflow on a single subject and parallelizing at the level of the server/scheduler?
  • Is it possible to have an example of how to set all these parameters?
  • Is there anything else very important to know when trying to optimize Nipype workflow resource usage?
  • (optional) Once the optimal parameters have been set, how (very roughly) does the optimization work when a workflow is iterated over subjects with nodes using OpenMP parallelization?
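To make the question concrete, here is my current attempt (a sketch assuming a 64-core, 128 GB server; n_procs/memory_gb/raise_insufficient are MultiProc plugin_args and openmp is a ReconAll input, but the actual Node/workflow calls are left in comments so the snippet is self-contained — corrections very welcome):

```python
import os

# Assumed server size; on the real machine use os.cpu_count().
n_cpus = 64
omp_nthreads = 4                             # threads granted to each OpenMP node
max_parallel_nodes = n_cpus // omp_nthreads  # concurrency I hope MultiProc achieves

# Cap OpenMP for interfaces that only read the environment variable.
os.environ["OMP_NUM_THREADS"] = str(omp_nthreads)

# Total resource budget handed to the MultiProc plugin.
plugin_args = {
    "n_procs": n_cpus,            # CPU budget for the whole workflow
    "memory_gb": 128,             # RAM budget (assumed)
    "raise_insufficient": False,  # queue nodes instead of failing when tight
}

# My understanding is that per-node declarations are still needed so the
# scheduler can account each node's threads against the budget, e.g.:
# recon = Node(ReconAll(openmp=omp_nthreads), name="recon",
#              n_procs=omp_nthreads, mem_gb=8)
# wf.run(plugin="MultiProc", plugin_args=plugin_args)
```

With these numbers I would expect at most 16 ReconAll nodes to run concurrently, but I am unsure whether the plugin-level and node-level settings interact this way in practice.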

Any help would be great,

Michael

Did you get an answer, Michael?

@michael @silver I am also super-curious whether an answer ever came through for this question. I’m going to piggy-back on it with a similar question of my own:

I am trying to process the UK Biobank MRI data with mindboggle using a mindboggle Singularity image, invoked with the command below. I intend it to use only 1 core, but while running it greedily utilizes 70 of the HPC’s available CPU cores. So: how can I prevent mindboggle from hogging all the CPUs?

singularity run -B ${DATA_DIR}:${DATA_DIR} nipy_mindboggle-2019-11-05-9bf2a92bbd17.simg mindboggle ${DATA_DIR}/${sub}/freesurfer/${sub} --out ${DATA_DIR}/${sub}/mindboggled
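In case it helps, the workaround I am considering is pinning the thread pools inside the container via environment variables (a sketch: Singularity injects SINGULARITYENV_-prefixed variables into the container, and OMP_NUM_THREADS / ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS are the knobs that OpenMP- and ITK/ANTs-based tools usually honour — I have not confirmed which ones mindboggle’s internals actually read; paths below are placeholders):

```python
import os
import shlex

# Placeholder values; substitute your own DATA_DIR and subject ID.
data_dir = "/data"
sub = "sub-01"

# SINGULARITYENV_FOO on the host becomes FOO inside the container,
# which should cap the OpenMP and ITK thread pools to a single core.
env = dict(
    os.environ,
    SINGULARITYENV_OMP_NUM_THREADS="1",
    SINGULARITYENV_ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS="1",
)

cmd = (
    f"singularity run -B {data_dir}:{data_dir} "
    "nipy_mindboggle-2019-11-05-9bf2a92bbd17.simg "
    f"mindboggle {data_dir}/{sub}/freesurfer/{sub} "
    f"--out {data_dir}/{sub}/mindboggled"
)
args = shlex.split(cmd)
# subprocess.run(args, env=env, check=True)  # uncomment to launch on the HPC
```

Does anyone know whether this is enough, or whether mindboggle spawns its own process pool that ignores these variables?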

The Singularity image was built from the Docker Hub image like this:

docker run -v /var/run/docker.sock:/var/run/docker.sock -v D:\path\to\singularity\image:/output --privileged -t --rm singularityware/docker2singularity nipy/mindboggle

Tagging some people involved in the mindboggle project here:
@binarybottle @satra