## Problem
Once Kubernetes manifests exist (#68), teams with more than a trivial deployment will need to customize values across environments (dev, staging, prod): replica counts, resource limits, image tags, ingress hostnames, auth secrets, and backend configurations.
Without a Helm chart (or equivalent templating), every environment fork maintains its own copy of the manifests, and keeping them in sync becomes a manual process.
## What varies across environments

| Parameter | Dev | Prod |
| --- | --- | --- |
| Replicas | 1 | 3+ |
| Image tag | `latest` / branch SHA | semver release |
| `--insecure` flag | yes (no TLS) | no (TLS at ingress) |
| `servers.json` content | test backends | production backends with real tokens |
| Resource limits | minimal | tuned to observed usage |
| Ingress / hostname | none or localhost | `mcp.example.com` |
| Audit log storage | ephemeral | PersistentVolume |
| Auth provider config | disabled or test tokens | JWT/OIDC (#57) |
| ACL policies | permissive | strict per-role |
## Why this matters
The MCP proxy is designed to be a shared infrastructure component — multiple AI clients (IDE extensions, CLI tools, agents) connect to a single proxy that manages backend pools. This "deploy once, connect many" pattern means the proxy deployment is high-leverage: getting it right matters, and getting it wrong affects every connected client.
Helm (or Kustomize overlays) is the standard way the Kubernetes ecosystem handles this. Without it, adopters either:
- Write their own Helm chart (duplicated effort across every team)
- Use raw manifests with `sed`/`envsubst` (fragile, error-prone)
- Skip Kubernetes entirely and use docker-compose only (limits production viability)
## Dependencies
## Expected behavior

A Helm chart (or equivalent) in the repository that allows deploying the proxy with a single `helm install` / `helm upgrade`, with sensible defaults and clear documentation of available values.
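As a sketch, the chart could follow the standard Helm layout; the directory and file names below are assumptions for illustration, not settled decisions:

```
charts/mcp-proxy/
├── Chart.yaml           # chart metadata and version
├── values.yaml          # documented defaults (dev-friendly)
├── values-prod.yaml     # example production overrides
└── templates/
    ├── deployment.yaml
    ├── service.yaml
    ├── ingress.yaml
    ├── configmap.yaml   # servers.json backend configuration
    └── secret.yaml      # auth tokens, mounted rather than baked into the image
```

Deploying an environment would then reduce to something like `helm install mcp-proxy charts/mcp-proxy -f values-prod.yaml`.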