Access the host machine's SGE or SLURM scheduler directly from a Singularity container?

Suppose I were to design a pipeline that wraps around a containerized application such as fmriprep, tedana, or tractoflow.

Some of my scripts will run the containerized application via a system call to either qsub or sbatch, while others will aggregate data using external libraries such as pandas.
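
For concreteness, the driver step would look something like this (run_fmriprep.sh and the subject label are just placeholder names, not anything fixed):

```bash
#!/bin/bash
# Host-side driver: submit the containerized step to the scheduler.
# run_fmriprep.sh is a hypothetical batch script that wraps singularity run.

# SLURM:
sbatch run_fmriprep.sh sub-01

# SGE equivalent:
# qsub run_fmriprep.sh sub-01
```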

For managing external libraries, I know I can use solutions like pyenv or conda for Python; however, I was wondering whether I could instead manage my libraries with another Singularity container. The caveat is that I would still need access to the host scheduler from inside this secondary container. Is this possible, or should I stick with solutions like pyenv and conda?

Hi,

I am a bit confused as to what the problem is. You can have a bash script, submitted via qsub/sbatch, that contains your singularity run and singularity exec $command statements as well as your python $python_script calls, which will use your Anaconda Python distribution. You should not be making any changes to the libraries inside containers, as that defeats their purpose. Perhaps there is an issue I am not understanding?
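
For example, here is a minimal sketch of such a batch script under SLURM (the image path, bind mounts, conda environment name, and aggregate_results.py are all placeholders to adapt to your cluster):

```bash
#!/bin/bash
#SBATCH --job-name=fmriprep-sub01
#SBATCH --time=24:00:00
#SBATCH --mem=16G

# Placeholder paths -- adjust for your cluster layout
SIF=/path/to/fmriprep.sif
BIDS_DIR=/data/bids
OUT_DIR=/data/derivatives

# Containerized step: runs inside the Singularity image
singularity run --cleanenv \
    -B "$BIDS_DIR":/data -B "$OUT_DIR":/out \
    "$SIF" /data /out participant

# Aggregation step: runs on the host with a conda-managed Python
source ~/miniconda3/etc/profile.d/conda.sh   # path assumes a default Miniconda install
conda activate myenv
python aggregate_results.py "$OUT_DIR"
```

Since the script itself runs on the host (as a scheduler job), both the singularity calls and the conda-managed Python see the host environment, so there is no need to reach the scheduler from inside a container.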

Best,
Steven