Thank you Chris, I am familiar with how to specify Nvidia GPU ids for CUDA-based applications, but I have no idea how to do it for a given command-line application. Would you know of a general way to do this on a Debian system (if that matters)? For bedpostx_gpu I just asked on the FSL mailing list, so I can post the response here as soon as I get one.
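(To be concrete, what I mean by specifying GPU ids is pinning a process to one device via the CUDA_VISIBLE_DEVICES environment variable. A minimal Python sketch of that idea; the device index "1" is only a placeholder, and I am assuming the tool being launched is CUDA-based:)

```python
import os
import subprocess

# Expose only GPU 1 to the child process: a CUDA-based tool launched with this
# environment will see that card as its device 0.  The index "1" is a placeholder.
env = dict(os.environ, CUDA_VISIBLE_DEVICES="1")

# Stand-in child process that just echoes the variable it inherited;
# in practice this would be the CUDA-based command-line tool.
subprocess.run(
    ["python3", "-c", "import os; print(os.environ['CUDA_VISIBLE_DEVICES'])"],
    env=env, check=True)
```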
I had come across this issue long ago and there was no existing functionality in nipype, so I extended the MultiProc plugin to allocate GPUs the same way CPUs are allocated to the nodes that run on the GPU. Thanks @ChrisGorgolewski for his suggestion to extend the plugin. I am using all available GPUs via the multiproc plugin: https://github.com/schahid/nipype/blob/master/nipype/pipeline/plugins/multiproc.py
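For reference, a minimal sketch of how a workflow is handed to the MultiProc plugin in stock nipype; the fork linked above keeps this interface and adds GPU bookkeeping on top (the exact GPU-related plugin arguments it exposes are in the linked multiproc.py, so I am not reproducing them here):

```python
from nipype import Node, Workflow
from nipype.interfaces.utility import Function

def add_one(x):
    return x + 1

# A trivial one-node workflow, only to show where the plugin arguments go.
node = Node(Function(input_names=["x"], output_names=["out"], function=add_one),
            name="add_one")
node.inputs.x = 1

wf = Workflow(name="demo", base_dir="/tmp/nipype_demo")
wf.add_nodes([node])

# Standard MultiProc scheduling: n_procs caps the number of concurrent CPU jobs.
# The GPU-aware fork schedules GPU slots for GPU nodes in the same fashion.
wf.run(plugin="MultiProc", plugin_args={"n_procs": 2})
```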
That works fine for the CUDA binaries, but you have to modify the bedpostx_gpu script according to your needs. How much memory does your GPU card have, how fast is it, and how many GPU processes can you start on it at once? Based on that, you need to modify the bedpostx_gpu script that wraps the xfibers_gpu binary.
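If you are not sure about the card's memory, you can query it with nvidia-smi, e.g. from Python (this only asks the driver; deciding how many xfibers_gpu jobs fit per card is still up to you):

```python
import subprocess

# Ask the NVIDIA driver for total/free memory per GPU (requires nvidia-smi).
out = subprocess.check_output(
    ["nvidia-smi",
     "--query-gpu=index,memory.total,memory.free",
     "--format=csv,noheader,nounits"],
    text=True)

for row in out.strip().splitlines():
    idx, total, free = (int(v.strip()) for v in row.split(","))
    print(f"GPU {idx}: {free} MiB free of {total} MiB")
```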
Thanks a lot for this info. At the moment I have two GPUs: a low-memory one (2 GB) that I use only for basic OS functions, and a high-memory one (12 GB) that I would like to use for GPU-based processes such as bedpostx_gpu.
Considering this configuration, would you have any advice on how to modify the bedpostx_gpu script?
The bedpostx_gpu script uses a batch system, which you need to change according to your environment.
E.g. the qsys option needs to be modified so the binaries are started with or without a batch system. Another option is njobs, which tells how many xfibers_gpu jobs can be started at the same time. If you run more than one xfibers_gpu job on a single GPU, then at the bottom of the fsl_sub script change the line (line 508) from $line to $line &, so each job is started in the background.
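To illustrate what that trailing & changes: the queued commands are started in the background and run concurrently instead of one after another. A rough Python equivalent of the two behaviours (the echo commands are only placeholders for the xfibers_gpu calls):

```python
import subprocess

cmds = [["echo", "xfibers_gpu part 0"], ["echo", "xfibers_gpu part 1"]]

# "$line"  : run each command and wait for it before starting the next one.
for cmd in cmds:
    subprocess.run(cmd, check=True)

# "$line &": start all commands at once and only wait at the end.
procs = [subprocess.Popen(cmd) for cmd in cmds]
for p in procs:
    p.wait()
```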