Run OpenCode in a network-locked container. All outbound traffic is routed through an enforcing proxy that applies the project's network policy.
See the main README for installation, architecture overview, and configuration options.
After running `agentbox init` (selecting "opencode") and starting the sandbox, configure your provider.
OpenCode is provider-agnostic. It supports many LLM providers (Anthropic, OpenAI, Google, and others) via API keys. You choose which provider to use at runtime.
Because the sandbox proxy only allows traffic declared in policy, you must add the appropriate provider service to your network policy. For example, to use OpenCode with Anthropic:
```yaml
# .agent-sandbox/policy/user.agent.opencode.policy.yaml
services:
  - claude
```

Or to use OpenCode with OpenAI:

```yaml
services:
  - codex
```

Available provider services: `claude` (Anthropic), `codex` (OpenAI), `gemini` (Google), `copilot` (GitHub Copilot).
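If you switch between providers in the same project, it may be convenient to allow more than one at once. This is an illustrative sketch, assuming the policy's `services` list accepts multiple entries in the same form as the single-provider examples above:

```yaml
# .agent-sandbox/policy/user.agent.opencode.policy.yaml
services:
  - claude   # Anthropic
  - codex    # OpenAI
```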
Edit the policy with `agentbox edit policy`; changes to the active policy hot-reload automatically while the proxy is running.
Set the provider's API key environment variable before starting the container, or export it inside the container shell:
```sh
export ANTHROPIC_API_KEY=sk-ant-...
opencode
```

Credentials persist in a Docker volume (`~/.config/opencode`), so you only need to do this once per project.
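Before launching, it can help to fail fast when the key is missing rather than hit an opaque provider error. A minimal sketch, assuming the Anthropic variable; other providers read their own (e.g. `OPENAI_API_KEY`):

```shell
# Minimal sketch: report whether the provider key is exported before
# launching OpenCode. Swap in your provider's variable as needed.
if [ -n "${ANTHROPIC_API_KEY:-}" ]; then
  status="key present"
else
  status="key missing"
fi
echo "$status"
```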
Inside the container:
```sh
opencode
```

The sandbox image includes a baked config that grants all tool permissions ("yolo" mode). OpenCode uses a config-based permission system rather than a CLI flag.
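For reference, a fully permissive OpenCode config looks roughly like this. This is an illustrative sketch, not the exact file baked into the image; the `permission` field names follow OpenCode's config schema, so verify them against your OpenCode version:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "permission": {
    "edit": "allow",
    "bash": "allow",
    "webfetch": "allow"
  }
}
```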
The image sets these environment variables to prevent network calls that would be blocked by the proxy:
- `OPENCODE_DISABLE_AUTOUPDATE=true` — prevents update checks
- `OPENCODE_DISABLE_LSP_DOWNLOAD=true` — prevents LSP binary downloads
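The image sets these for you; as a quick sanity check inside the container shell, you can print them back (or export them manually when running OpenCode outside the sandbox image):

```shell
# Sanity check: both kill switches should print "true".
export OPENCODE_DISABLE_AUTOUPDATE=true
export OPENCODE_DISABLE_LSP_DOWNLOAD=true
echo "$OPENCODE_DISABLE_AUTOUPDATE $OPENCODE_DISABLE_LSP_DOWNLOAD"
```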
When you are finished, stop the container:

```sh
agentbox compose down
```