I was reading in the pydra documentation about a class designed specifically for executing jobs within a container.
I was curious whether similar logic has been implemented in nipype: can I easily run a single node in a container, rather than running the whole workflow inside one? Would I have to create a CommandLine interface in order to do so?
You can write a Python script that runs the workflow and then use
`singularity exec -e <container> python $your_script`. Just make sure the container has nipype installed (fmriprep's image, for example).
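A minimal sketch of this approach, assuming a local image named `nipype.sif` that bundles nipype (the image name, script name, and workflow contents are placeholders, not from the original post):

```shell
# Hypothetical sketch: put the workflow in a plain Python script, then
# execute that script with the Python interpreter inside the container.
cat > run_workflow.py <<'EOF'
# Minimal nipype workflow script; it only runs inside the container,
# which is what provides the nipype package.
from nipype import Workflow

wf = Workflow(name="demo", base_dir="/tmp")
# ... add nodes here ...
wf.run()
EOF

# Only attempt the container run if singularity is actually installed;
# nipype.sif is an assumed local image with nipype inside.
if command -v singularity >/dev/null 2>&1; then
    singularity exec -e nipype.sif python run_workflow.py
fi
```

The `-e` flag cleans the host environment before entering the container, which helps avoid host Python installations leaking into the containerized run.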
It looks like the idea of associating an interface with a singularity container has already been discussed on the nipype issue tracker. Hopefully we can make some headway in the future.
Dear @ajschadler ,
There is a trick we implemented for the neurodesk.org project: we wrote a small tool that automatically generates wrapper scripts for the binaries inside a Singularity container and puts them on the PATH, so the software behaves as if it were natively installed. Nipype can then call into the container without any changes to nipype itself.
The tool is called transparent-singularity (GitHub: NeuroDesk/transparent-singularity): it deploys a Singularity container so that it behaves as if the software were installed natively.
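The core idea behind such wrappers can be sketched in a few lines of shell. This is a hand-rolled illustration, not the tool's actual output; the image path `fsl.sif`, the binary name `fslmaths`, and the `~/bin` wrapper directory are all assumed for the example:

```shell
# Hypothetical sketch: generate a wrapper so that calling `fslmaths`
# on the host transparently runs the same-named binary in a container.
CONTAINER="$HOME/containers/fsl.sif"  # assumed image location
BINDIR="$HOME/bin"                    # a directory already on $PATH
mkdir -p "$BINDIR"

# Write a wrapper script that forwards all arguments into the container.
# $CONTAINER is expanded now; "$@" is escaped so it expands at call time.
cat > "$BINDIR/fslmaths" <<EOF
#!/usr/bin/env bash
exec singularity exec "$CONTAINER" fslmaths "\$@"
EOF
chmod +x "$BINDIR/fslmaths"
```

With such wrappers on the PATH, nipype's existing command-line interfaces find the "binaries" as usual, so no interface code needs to change.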
Here is an example where we use it with Nipype and run the interface with binaries from within a container (e.g. submitting to an HPC job scheduler):