Max_jobs : concurrent or queued?

Does the SLURM plugin's `max_jobs` argument cap the number of running jobs or the number of queued jobs?
I believe it's "running" only, based on the docs and my own experiments.

I need something that caps the number of queued + running jobs, because our cluster scheduler is massively overloaded: having too many jobs even just sitting in the queue is enough to trigger nipype timeout errors and subsequent crashes.

Ref: “Added max_jobs paramter to plugin_args. limits the number of jobs
executing at any given point in time”
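For reference, `max_jobs` is passed through `plugin_args` when running the workflow. A minimal sketch of what I'm doing; the workflow object `wf` and the `sbatch_args` value are placeholders, not from my actual setup:

```python
# Hedged sketch: passing max_jobs to the SLURM plugin via plugin_args.
# "wf" and the sbatch_args value below are placeholders.
plugin_args = {
    "max_jobs": 10,                    # cap on jobs the plugin submits
    "sbatch_args": "--time=01:00:00",  # placeholder sbatch options
}

# wf.run(plugin="SLURM", plugin_args=plugin_args)
print(plugin_args["max_jobs"])
```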

max_jobs limits queued + running

If the highest-level workflow has this set (let's call it the meta contrast workflow), I assume the setting propagates to nested workflows. However, if I set the top level to 10 and there are two inner workflows, each also limited to 10, can I still end up with 20 jobs queued + running? In other words, is max_jobs a per-workflow limit on jobs launched, rather than a total across all child workflows?
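To make the per-workflow interpretation concrete, here is a toy stdlib simulation (not nipype code; the `Workflow` class and its attributes are invented for illustration) of what it would mean for each workflow to enforce its own cap independently:

```python
# Toy model of per-workflow max_jobs semantics (hypothetical, not nipype):
# each workflow caps its own queued + running jobs, so two sibling
# workflows each capped at 10 can still put 20 jobs in flight total.
class Workflow:
    def __init__(self, name, max_jobs):
        self.name = name
        self.max_jobs = max_jobs
        self.in_flight = []  # jobs this workflow has queued or running
        self.pending = []    # jobs waiting for one of this workflow's slots

    def submit(self, job):
        # The cap is checked only against this workflow's own jobs,
        # independent of any sibling workflow.
        if len(self.in_flight) < self.max_jobs:
            self.in_flight.append(job)
        else:
            self.pending.append(job)


wf_a = Workflow("inner_a", max_jobs=10)
wf_b = Workflow("inner_b", max_jobs=10)
for i in range(15):
    wf_a.submit(f"a{i}")
    wf_b.submit(f"b{i}")

# Each workflow holds 10 in flight, so the scheduler sees 20 total.
total = len(wf_a.in_flight) + len(wf_b.in_flight)
print(total)  # 20
```

If this reading is right, a top-level cap of 10 would not stop the combined children from exceeding 10, which is exactly the overload scenario I'm worried about.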