Nipype GPU workflow customization for multi-core CPUs and one or two GPUs


#1

Is it possible to customize a nipype workflow so that all MapNodes run across the available CPU cores via the MultiProc plugin, but any node that runs on a GPU is limited to only two or three concurrent instances? For example, with 20 cores but only two GPUs, the MapNode should run in parallel on the available cores, while the node executing code on the GPU should limit its parallel executions to two or three (setting a MapNode to run serially allows only one process at a time).
Any suggestions for utilizing all the cores and GPUs to their maximum allowable load?


#2

I think you would have to extend the MultiProc plugin to track GPUs as resources.
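The core of such an extension is slot bookkeeping: separate counters for free CPU processes and free GPU slots, checked before a task is submitted and restored when it finishes. The sketch below is only an illustration of that accounting, not nipype code; the class and parameter names (`ResourceTracker`, `procs_per_gpu`, `is_gpu_task`) are made up for this example.

```python
# Hedged sketch of the bookkeeping a GPU-aware MultiProc plugin needs.
# All names here are illustrative, not part of the nipype API.

class ResourceTracker:
    def __init__(self, n_procs, n_gpus, procs_per_gpu):
        # Free CPU process slots and free GPU task slots.
        self.free_procs = n_procs
        self.free_gpu_slots = n_gpus * procs_per_gpu

    def can_run(self, is_gpu_task):
        """Check whether a slot of the required kind is available."""
        if is_gpu_task:
            return self.free_gpu_slots > 0
        return self.free_procs > 0

    def acquire(self, is_gpu_task):
        """Claim a slot; return False if none is free."""
        if not self.can_run(is_gpu_task):
            return False
        if is_gpu_task:
            self.free_gpu_slots -= 1
        else:
            self.free_procs -= 1
        return True

    def release(self, is_gpu_task):
        """Return a slot when a task finishes."""
        if is_gpu_task:
            self.free_gpu_slots += 1
        else:
            self.free_procs += 1
```

With 20 cores and 2 GPUs (one task per GPU), CPU-bound tasks keep filling the 20 process slots while a third GPU task simply waits until one of the two GPU slots is released.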


#3

@ChrisGorgolewski Hi, I have attempted to modify the MultiProc plugin for mixed GPU/CPU scheduling. Could you please take a look here: https://github.com/schahid/nipype/blob/master/multiproc.py
It is for nipype version 0.13.1 and distributes nodes across CPUs and GPUs.
Is extending the MultiProc plugin the only way to get mixed GPU/CPU scheduling, or is there a better approach?
Looking forward to your feedback.
Thanks a lot,


#4

Awesome! Please send a pull request on github - someone will take it from there.

More details on how to do it: https://github.com/nipy/nipype/blob/master/CONTRIBUTING.md


#5

Hi, I tried to fork the nipype project, but that clones the current master branch. Can you tell me how to fork the 0.13.1 branch?


#6

I can, but you should work off the master branch - otherwise it will be impossible to merge your changes. Why would you like to work off the 0.13.1 branch?


#7

Hi,
I worked on the 0.13.1 branch because that was what I had installed, due to dependency problems in my virtual environment.
I have now pulled the latest master branch and modified the MultiProc plugin. Testing with my non-GPU and GPU-based workflows (using iterables on my input nodes across 3 GPUs and 6 CPU cores) went fine, and I have created a new pull request.
Thanks a lot.


#8

Mohammad, do you have any workflows that use GPU/CPU in combination that you could share? I want to look into migrating some of my workflows to probtrackx2_gpu …


#9

Hi,

To use multiple GPUs together with CPUs in the same nipype workflow, you need the modified MultiProc plugin (https://github.com/schahid/nipype-multiproc), placed inside the …/lib/python2.7/site-packages/nipype/pipeline/plugins folder.

Then run your workflow with one extra plugin argument:

pipeline.run(
    plugin='MultiProc',
    plugin_args={'n_procs': args.processes, 'n_gpus': args.ngpus, 'ngpuproc': args.ngpuproc}
)

  • n_procs is the usual argument specifying how many CPU processes run at the same time,
  • n_gpus tells the workflow how many GPUs to use,
  • ngpuproc specifies how many processes (tasks) may run on a single GPU.

A nipype node that runs on the GPU must set inputs.use_gpu=True or inputs.use_cuda=True.

If you want to run the bedpostx_gpu script before probtrackx2, you also need to modify that script (an FSL submit script) and set njobs to the number of xfibres jobs that should run on a single GPU.

Best regards,


Shahid.


#10

Thanks, this is all extremely helpful! Did you create a separate interface for probtrackx2_gpu and bedpostx_gpu, or did you just create a symlink and/or change your nipype config to use probtrackx2_gpu instead of probtrackx…

Thanks again…


#11

Hi,

No, I didn't create separate interfaces for them; I used the default bedpostx5_gpu, and I don't know whether the newer nipype version has an interface for probtrackx2_gpu. If there is none at the moment, you can create a custom one and set the input field 'use_cuda' or 'use_gpu' so that the plugin treats it as a GPU task rather than one that runs on the CPUs.
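One simple way such a custom interface can avoid symlinks is to switch the executable name on the GPU flag. The stand-alone helper below only illustrates that command-assembly idea; a real nipype interface would subclass CommandLine and declare use_gpu as a trait, and the flag spellings may differ between FSL versions.

```python
# Hedged sketch: pick the GPU binary from a use_gpu flag instead of a
# symlink. Not a nipype interface - just the executable-selection logic
# a custom probtrackx2 interface could use.

def probtrackx_cmdline(use_gpu, samples, mask):
    """Assemble a probtrackx2 command, choosing the GPU binary if asked."""
    exe = "probtrackx2_gpu" if use_gpu else "probtrackx2"
    return "%s --samples=%s --mask=%s" % (exe, samples, mask)
```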