I would like to use a GPU-enabled node in Nipype (Bedpostx with `use_gpu=True`), and I was wondering if:
- there is a global option to tell Nipype which GPU id to use for all GPU-enabled nodes?
- and/or there is an option at the node level (in this case Bedpostx), perhaps overriding the global one if it exists?
I don’t think such functionality exists in Nipype, but it could be an interesting addition.
Thank you Chris, I am familiar with how to specify Nvidia GPU ids for CUDA-based applications, but I have no idea how to do it for a given command-line application. Would you know a general way on a Debian system (if that matters)? For bedpostx_gpu I just asked on the FSL mailing list, so I can post the response here as soon as I get one.
In Nipype you can set the `environ` input for individual Nodes to specify `CUDA_VISIBLE_DEVICES`.
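A minimal sketch of that idea (the Nipype part is shown as comments and assumes the FSL `BEDPOSTX5` interface and the `environ` input trait; adjust to your Nipype version):

```python
import os

# CUDA_VISIBLE_DEVICES remaps device visibility: setting it to "1" makes
# only the second physical GPU visible to the process (which sees it as
# device 0). Building the environment a GPU job should run with:
gpu_env = dict(os.environ, CUDA_VISIBLE_DEVICES="1")

# In Nipype this would look roughly like (not executed here):
#   from nipype import Node
#   from nipype.interfaces import fsl
#   bpx = Node(fsl.BEDPOSTX5(use_gpu=True), name="bedpostx")
#   bpx.inputs.environ = {"CUDA_VISIBLE_DEVICES": "1"}

print(gpu_env["CUDA_VISIBLE_DEVICES"])
```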
I had come across this issue long ago, and there was no existing functionality in Nipype. That's why I extended the MultiProc plugin to utilize GPUs the same way CPUs are used for the nodes running on a GPU. Thanks @ChrisGorgolewski for his suggestion to extend the plugin. I am using all available GPUs via the MultiProc plugin: https://github.com/schahid/nipype/blob/master/nipype/pipeline/plugins/multiproc.py
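To illustrate the scheduling idea behind such a plugin (these helper names are mine, not the actual plugin API): hand out GPU ids round-robin so concurrent GPU nodes land on different devices.

```python
import os
from itertools import cycle

def make_gpu_env_factory(gpu_ids):
    """Return a function that yields a process environment pinned to the
    next GPU in a round-robin pool (hypothetical helper for illustration)."""
    pool = cycle(gpu_ids)

    def next_env():
        # Each call pins the next job to a different device.
        return dict(os.environ, CUDA_VISIBLE_DEVICES=str(next(pool)))

    return next_env

next_env = make_gpu_env_factory([0, 1])
print(next_env()["CUDA_VISIBLE_DEVICES"])  # first job -> "0"
print(next_env()["CUDA_VISIBLE_DEVICES"])  # second job -> "1"
```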
That works fine for the CUDA binaries, but you have to modify the bedpostx_gpu script to work according to your needs. How much memory does your GPU card have, how fast is it, and how many GPU processes can you start on it? Based on that, you need to modify the bedpostx_gpu script that wraps the xfibers_gpu binary.
Thanks a lot for this info. At the moment I have two GPUs: a low-memory one (2 GB) I am using only for basic OS functions, and a high-memory one (12 GB) I would like to use for GPU-based processes such as bedpostx_gpu.
Considering this configuration, would you have any advice on how to modify the bedpostx_gpu script?
The bedpostx_gpu script uses a batch system, which you need to change according to your environment.
E.g. the qsys option needs to be modified to start the binaries with or without a batch system. Another option is njobs, which tells how many xfibers_gpu jobs can be started at the same time. If you run more than one xfibers job on a single GPU, then at the bottom of the fsl_sub script, change line 508 from `$line` to `$line &`.
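The `$line` vs `$line &` change is the difference between running jobs one at a time and backgrounding them all before waiting. A sketch of that difference in Python terms, using harmless placeholder commands in place of the real xfibers jobs:

```python
import subprocess

# Placeholder commands standing in for xfibers_gpu invocations.
cmds = [["echo", "xfibers job 1"], ["echo", "xfibers job 2"]]

# Sequential, like "$line": each job blocks until it finishes.
for c in cmds:
    subprocess.run(c, check=True, stdout=subprocess.DEVNULL)

# Concurrent, like "$line &" followed by "wait": start all jobs,
# then collect their results.
procs = [subprocess.Popen(c, stdout=subprocess.PIPE, text=True) for c in cmds]
outputs = [p.communicate()[0].strip() for p in procs]
print(outputs)
```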