Run Python functions in persistent, warm subprocesses inside isolated virtual environments.
- Microservice-like boundaries for Python code on a single machine, without deploying a service.
- Cleanly separate libraries with diverging dependencies by running them in dedicated virtual environments.
- Keep worker processes warm to avoid repeated heavy imports and initialization costs.
- Simple, local orchestration: your application calls functions; workers execute them in isolated subprocesses via JSONL over stdio.
- Great for: side-by-side versions (e.g., `numpy` v1 and v2), incremental upgrades, and safer experimentation.
- Stdlib-only core, optional `uv` support if installed
- Pyproject-first dependency resolution, with fallbacks to requirements files or inline packages
- JSONL protocol over stdio; simple `Pool` API with timeouts and restarts
- Context manager, async and batch calls; in-flight concurrency control
- Autoscaling workers by default (`workers="auto"`), fixed-size workers optional
- Trusted code only; no sandboxing or privilege dropping.
- Return values are sent as JSONL. venvmux applies a best-effort encoder so your worker functions don’t need decorators or library changes:
- Dataclasses → dict
- Enums → value (fallback: name)
- datetime/date → ISO-8601 string
- Decimal/UUID/Path → str
- bytes → `{"__bytes__": true, "b64": "..."}` (base64)
- set/tuple → list
- Optional: numpy.ndarray → list, pandas.DataFrame/Series → dict (if installed)
- Duck-typed: `.model_dump()` / `.dict()` / `__json__()` if available
- Unknown objects → `repr(obj)`
- Large results: optionally spill to a temp file to avoid large JSON frames.
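The conversion rules above can be approximated with a `json.JSONEncoder` subclass along these lines (a stdlib-only sketch of the idea, not venvmux's actual encoder):

```python
import base64
import dataclasses
import enum
import json
from datetime import date, datetime
from decimal import Decimal
from pathlib import Path
from uuid import UUID


class BestEffortEncoder(json.JSONEncoder):
    """Sketch of a best-effort encoder matching the rules listed above."""

    def default(self, obj):
        if dataclasses.is_dataclass(obj) and not isinstance(obj, type):
            return dataclasses.asdict(obj)  # dataclass -> dict
        if isinstance(obj, enum.Enum):
            return obj.value  # enum -> value
        if isinstance(obj, (datetime, date)):
            return obj.isoformat()  # datetime/date -> ISO-8601 string
        if isinstance(obj, (Decimal, UUID, Path)):
            return str(obj)
        if isinstance(obj, (bytes, bytearray)):
            return {"__bytes__": True, "b64": base64.b64encode(bytes(obj)).decode()}
        if isinstance(obj, (set, tuple)):
            return list(obj)
        for attr in ("model_dump", "dict", "__json__"):
            method = getattr(obj, attr, None)
            if callable(method):
                return method()  # duck-typed hooks (pydantic-style, etc.)
        return repr(obj)  # last resort for unknown objects


print(json.dumps({"when": date(2024, 1, 2), "raw": b"hi"}, cls=BestEffortEncoder))
```

Note that stdlib `json` already serializes tuples as arrays before `default` is consulted; the `set`/`tuple` branch covers sets and any tuple subclasses that reach it.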
- Requires Python 3.11+ (CPython). Supported platforms: macOS, Linux, Windows.
- Normal install:

  ```bash
  pip install venvmux
  ```
- With speed extra (optional; enables faster serialization via orjson if available):

  ```bash
  pip install "venvmux[speed]"
  ```

- Behavior: when `Pool(serializer="orjson")` is set, workers attempt to use `orjson`; if it is not installed, they gracefully fall back to stdlib `json`.
- Editable for local dev:

  ```bash
  uv pip install -e .
  # or with pip:
  python -m pip install -e .
  ```

- Editable with dev extras (contributors; installs pytest/ruff/mypy/pre-commit tooling):

  ```bash
  uv pip install -e .[dev]  # or: python -m pip install -e .[dev]
  ```

- You can combine extras, e.g. `.[dev,speed]`, to install both tooling and the speed extra.
- Create worker modules that expose plain Python functions (no decorators required)
```python
# file: yourpkg_v1/entry.py (module loaded by the worker process)
def compute(n: int) -> dict:
    return {"version": "v1", "n": n}
```

```python
# file: yourpkg_v2/entry.py (module loaded by the worker process)
def compute(n: int) -> dict:
    return {"version": "v2", "n": n}
```

- Call those functions from your application using named environments
```python
# file: main.py (your application)
from venvmux import EnvSpec, Pool

pool = Pool.from_envs(
    {
        "v1": (EnvSpec(workers=1), "yourpkg_v1.entry"),
        "v2": (EnvSpec(workers=2), "yourpkg_v2.entry"),
    }
)

pkg_v1 = pool.venv("v1")
pkg_v2 = pool.venv("v2")

print(pkg_v1.compute(n=1))
print(pkg_v2.compute(n=2))

pool.close()
```

- The `worker_module` (here `yourpkg_v1.entry` and `yourpkg_v2.entry`) must be importable in the worker process.
- If your project has a `pyproject.toml`, venvmux will (by default) build an isolated venv and install your project there.
- For local development, you can reuse the current interpreter with `EnvSpec(python=sys.executable)` and ensure your packages are on `PYTHONPATH`, or use `worker_paths`:

  ```python
  import sys
  from venvmux import EnvSpec

  EnvSpec(python=sys.executable, worker_paths=["/abs/path/to/src"])  # example
  ```
- Use `pool.close()` to fully stop all environments and background threads; `close()` is an alias for `stop()`.
- To stop only a single environment without tearing down the pool, call `pool.stop_env("env_name")`.
- To gracefully restart all workers for one environment while keeping its sizing, use `pool.reload_env("env_name")`.
- `pool.stop(name="env_name")` is also supported to stop a specific env.
```python
from venvmux import EnvSpec, Pool

with Pool.from_envs({"env": (EnvSpec(workers=2), "yourpkg_v2.entry")}) as pool:
    # async
    fut = pool.call_async("compute", {"n": 21}, env="env")
    print(fut.result())

    # batch
    results = pool.venv("env").map("compute", [{"n": i} for i in range(3)], max_workers=2)
    print(results)
```

- `EnvSpec.inflight` limits concurrent in-flight calls per worker process.
- By default, `EnvSpec.workers = "auto"` and the pool autoscales:
  - Starts with a small number of workers (min 1 by default).
  - Grows when all current workers are saturated (up to `max_workers`, default: max(2, CPU count)).
  - Scales down after inactivity (`scale_down_idle_s`, default: 120s) back to at least `min_workers`.
- Tuning knobs on `EnvSpec` (optional):
  - `min_workers`: minimum warmed workers when using auto
  - `max_workers`: maximum workers when using auto
  - `autogrow`: enable/disable growth even if `workers="auto"`
  - `scale_down_idle_s`: idle time before scaling down
  - `saturation_window_len`: number of recent checks before growing (default 5)
  - `scale_up_cooldown_s`: minimum seconds between scale-ups (default 2.0)
- Fixed-size mode: set an integer, e.g. `EnvSpec(workers=4)`, to always run exactly 4 workers (autogrow disabled).
- `min_workers` / `max_workers`: bounds for auto mode.
- `inflight`: concurrent in-flight calls per worker process.
- `autogrow`: enable/disable growth when `workers="auto"`.
- `scale_down_idle_s`: idle time before shrinking back down.
- `saturation_window_len`: number of recent checks to consider.
- `scale_up_cooldown_s`: minimum seconds between scale-ups.
- Growth: requires sustained saturation (a recent majority of checks shows all workers at the inflight limit).
- Shrink: after `scale_down_idle_s` of inactivity, the oldest workers stop until `min_workers` remain.
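The "sustained saturation" rule can be pictured with a toy decision function (a hypothetical sketch whose parameter mirrors `saturation_window_len`; this is not the library's actual scaler):

```python
from collections import deque


def should_grow(saturation_window, window_len: int = 5) -> bool:
    """Grow only when the window is full and a majority of recent checks
    observed every worker at its inflight limit (i.e. saturated)."""
    recent = list(saturation_window)[-window_len:]
    return len(recent) == window_len and sum(recent) > window_len // 2


window = deque(maxlen=5)
for saturated in [True, True, False, True, True]:
    window.append(saturated)
print(should_grow(window))  # -> True: 4 of the last 5 checks were saturated
```

Requiring a full window plus a majority vote avoids scaling up on a single transient burst.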
- You can set `Pool(max_payload_bytes=...)` to spill large payloads to a temp file automatically.
- You can set `Pool(max_result_bytes=...)` to spill large results to a temp file automatically. The worker returns a small manifest; the pool reads and returns the actual data transparently.
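The spill mechanism can be sketched as follows (the manifest keys here are hypothetical illustrations, not venvmux's actual wire format):

```python
import json
import tempfile
from pathlib import Path


def encode_result(obj, max_result_bytes: int = 1024) -> dict:
    """Inline small results; spill large ones and return only a small manifest."""
    payload = json.dumps(obj)
    if len(payload.encode()) <= max_result_bytes:
        return {"inline": payload}
    with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
        f.write(payload)
    return {"spilled": f.name}  # tiny frame instead of a huge JSON line


def decode_result(frame: dict):
    """The reader side transparently loads spilled data and cleans up."""
    if "inline" in frame:
        return json.loads(frame["inline"])
    path = Path(frame["spilled"])
    data = json.loads(path.read_text())
    path.unlink()  # remove the temp file once consumed
    return data


big = {"rows": list(range(10_000))}
frame = encode_result(big)
print("spilled" in frame)  # -> True: the manifest, not the data, crossed the pipe
print(decode_result(frame) == big)  # -> True
```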
- The default serializer is stdlib `json`. For faster serialization, set `Pool(serializer="orjson")`. The pool passes `VENV_MUX_SERIALIZER=orjson` to workers; they gracefully fall back to `json` if `orjson` is unavailable.
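Inside a worker, that graceful fallback could look like this stdlib-only sketch (assumed behavior, not the actual worker code):

```python
import json
import os


def get_dumps():
    """Prefer orjson when VENV_MUX_SERIALIZER=orjson; otherwise use stdlib json."""
    if os.environ.get("VENV_MUX_SERIALIZER") == "orjson":
        try:
            import orjson

            return lambda obj: orjson.dumps(obj).decode()
        except ImportError:
            pass  # orjson requested but unavailable: fall back silently
    # compact separators keep stdlib output byte-compatible with orjson's
    return lambda obj: json.dumps(obj, separators=(",", ":"))


dumps = get_dumps()
print(dumps({"n": 1}))
```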
- Control via env vars:
  - `VENV_MUX_LOG=DEBUG` (or INFO/WARN/ERROR)
  - `VENV_MUX_LOG_JSON=1` to emit JSON logs
  - `VENV_MUX_LOG_FORMAT` to customize text logs
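Env-var-driven log levels like these typically map onto stdlib `logging` levels roughly as follows (a generic sketch, not venvmux internals):

```python
import logging
import os


def level_from_env() -> int:
    """Map VENV_MUX_LOG onto a stdlib logging level; unknown values -> WARNING."""
    name = os.environ.get("VENV_MUX_LOG", "WARNING").upper()
    return {
        "DEBUG": logging.DEBUG,
        "INFO": logging.INFO,
        "WARN": logging.WARNING,  # accept the short spelling from the list above
        "WARNING": logging.WARNING,
        "ERROR": logging.ERROR,
    }.get(name, logging.WARNING)


os.environ["VENV_MUX_LOG"] = "DEBUG"
print(level_from_env() == logging.DEBUG)  # -> True
```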
- Calls raise subclasses of `CallError` with a structured `code` and optional `traceback`:
  - `NotFoundError`: function not found in the worker module
  - `InvalidJsonError`: worker received invalid JSON input
  - `RemoteExceptionError`: remote function raised; includes traceback
- venvmux uses a simple JSONL protocol with a shared `PROTOCOL_VERSION`. The pool validates the worker's reported version during startup and raises a clear error if there is a mismatch.
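The version handshake can be pictured with a stdlib-only sketch (the frame fields and constant value here are hypothetical, not the library's actual protocol):

```python
import json

PROTOCOL_VERSION = 1  # hypothetical shared constant


def hello_frame() -> str:
    """One JSON object per line: the worker announces its protocol version."""
    return json.dumps({"type": "hello", "protocol": PROTOCOL_VERSION}) + "\n"


def validate_hello(line: str, expected: int = PROTOCOL_VERSION) -> dict:
    """Pool side: parse the first JSONL frame and fail loudly on a mismatch."""
    frame = json.loads(line)
    if frame.get("protocol") != expected:
        raise RuntimeError(
            f"protocol mismatch: worker={frame.get('protocol')} pool={expected}"
        )
    return frame


print(validate_hello(hello_frame())["type"])  # -> hello
```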
See Quickstart above for the preferred multi-env usage pattern.
- Priority: `pyproject.toml` (`uv.lock` + `uv sync` if available) > requirements files > inline packages
- Venv path: `~/.venvmux/envs/{hash}/`
- Hash inputs: interpreter/version, file contents, inline packages, platform/arch
- The hash also includes `PIP_INDEX_URL`, `PIP_EXTRA_INDEX_URL`, and `UV_INDEX` if set
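The hash inputs listed above suggest a stable digest along these lines (an illustrative sketch; the document does not specify the library's actual hash recipe):

```python
import hashlib
import json
import platform
import sys


def env_hash(file_contents: dict[str, bytes],
             inline_packages: list[str],
             index_urls: dict[str, str]) -> str:
    """Stable digest over everything that should force a fresh venv."""
    material = {
        "python": sys.version.split()[0],
        "platform": (platform.system(), platform.machine()),
        # hash file contents, not paths, so edits invalidate the env
        "files": {name: hashlib.sha256(data).hexdigest()
                  for name, data in sorted(file_contents.items())},
        "inline": sorted(inline_packages),
        "indexes": dict(sorted(index_urls.items())),
    }
    blob = json.dumps(material, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:16]


h = env_hash({"pyproject.toml": b"[project]\nname = 'demo'"}, ["requests"], {})
print(f"~/.venvmux/envs/{h}/")
```

Sorting every collection before hashing keeps the digest independent of insertion order, so identical specs always map to the same env directory.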
- Default home: `~/.venvmux` (envs under `~/.venvmux/envs/{hash}`)
- Override via `Pool(home="/path/to/.venvmux")` or the environment variable `VENV_MUX_HOME`
- Cleanup: stop the pool, then you can safely remove stale env directories under the home
- If your worker module is not importable by default, set `EnvSpec(worker_paths=["/abs/path/to/src"])` to extend the worker's `PYTHONPATH`.
- Security note: worker paths are trusted; only point to directories you control.
- `EnvSpec`
- `Pool`
- `VenvHandle`
- For trusted code only; no sandboxing.
- If `uv` is not available, the library uses `pip`.
- Set `VENV_MUX_LOG=DEBUG` for verbose logs.
- If using custom indexes, note that the env hash includes index URLs; changing them creates a new env.
- Unit tests mock installers and rely on the local interpreter.
- See `examples/two_envs_demo.py`
- See `examples/typed_remote_demo.py`