Command-and-control TUI for parallel multi-LLM agent fleets.
Engineering context: Solves the problem of managing parallel AI agent sessions across providers without losing cost visibility or prompt quality. Draws on multi-cloud infrastructure patterns — the same fleet management, circuit breaking, and cost enforcement that production platforms need.
Orchestrates Claude Code, Gemini CLI, OpenAI Codex CLI, Cline CLI, and experimental Google Antigravity handoffs from a single k9s-style interface. Built with Charmbracelet (BubbleTea + Lip Gloss).
- Orchestrate multiple LLM providers (Claude, Gemini, Codex, Cline, Antigravity) with capability-aware session management
- Project provider-neutral fleet roles into native Codex, Claude, Gemini, and Cline instruction surfaces
- Generate repo-native Antigravity and Gemini workflow surfaces from the canonical Ralph workflow catalog
- Discover ralph-enabled repos across your workspace (`--scan-path`)
- Monitor live status: loop iteration, circuit breaker state, per-provider costs, model selection
- Control ralph loops, headless sessions, and Codex planner/worker loops from TUI or MCP tools
- Track costs across all providers in a unified cost ledger
- Stream logs in real-time with reactive file watching (fsnotify)
- Configure `.ralphrc` settings per repo from an in-TUI editor
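The workspace discovery above can be approximated with a plain shell walk. This is a sketch only, assuming repos are marked by a `.ralph/` directory (the convention `internal/discovery` scans for, per the architecture section):

```shell
#!/bin/sh
# scan_ralph_repos ROOT: print every repo under ROOT containing a .ralph/
# marker directory. Approximates what ralphglasses --scan-path does; the
# depth limit of 3 is an arbitrary choice for this sketch.
scan_ralph_repos() {
  find "$1" -maxdepth 3 -type d -name .ralph 2>/dev/null |
    while read -r marker; do dirname "$marker"; done
}

# Example: scan_ralph_repos ~/hairglasses-studio
```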
```shell
go install github.com/hairglasses-studio/ralphglasses@latest
```

Or build from source:

```shell
git clone https://github.com/hairglasses-studio/ralphglasses.git
cd ralphglasses

# Bootstrap local tooling if needed
./scripts/bootstrap-toolchain.sh

# Build
./scripts/dev/go.sh build ./...

# Launch TUI
./scripts/dev/go.sh run . --scan-path ~/hairglasses-studio
```

Repo-local Codex MCP discovery is already wired in via `.codex/config.toml` and `.mcp.json`. Other MCP clients can register the same `./scripts/dev/run-mcp.sh --scan-path ~/hairglasses-studio` entrypoint manually if needed.
Launch sessions against any supported provider:
| Provider | CLI | Default Model | Install |
|---|---|---|---|
| `codex` (default) | Codex CLI | gpt-5.4 | `npm i -g @openai/codex-cli` |
| `claude` | Claude Code | sonnet | Pre-installed |
| `gemini` | Gemini CLI | gemini-3.1-pro | `npm i -g @google/gemini-cli` |
| `cline` | Cline CLI | Cline-managed free tier | `npm i -g cline` |
| `antigravity` | Google Antigravity | Antigravity-managed | Install Antigravity locally |
Antigravity is launch-only from Ralph. It opens a new Antigravity agent window rooted at the repo and relies on checked-in `.agents/rules/`, `.agents/workflows/`, `.agents/skills/`, `.mcp.json`, and the generated `.gemini/extensions/ralphglasses-workspace/` bundle. It does not participate in Ralph's streaming session runtime, teams, loops, or failover.
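A quick way to see which of the CLIs above are present on a machine. This is a hypothetical helper, not part of the repo; the binary names (`codex`, `claude`, `gemini`, `cline`) are assumptions inferred from the install commands:

```shell
#!/bin/sh
# check_provider_clis: report which provider CLIs are on PATH. Antigravity
# is omitted because it is launched as a desktop app, not a CLI binary.
check_provider_clis() {
  for cli in codex claude gemini cline; do
    if command -v "$cli" >/dev/null 2>&1; then
      echo "$cli: $(command -v "$cli")"
    else
      echo "$cli: not installed"
    fi
  done
}
```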
```shell
OPENAI_API_KEY=sk-...         # Codex CLI
GOOGLE_API_KEY=AIza...        # Gemini CLI
ANTHROPIC_API_KEY=sk-ant-...  # Claude Code (optional if using OAuth)
```

Prompt-enhancement and embedding helpers can optionally use a local Ollama-compatible endpoint.
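Before launching sessions it can help to confirm which keys are actually exported. This is a hypothetical preflight, not part of the repo; the variable names match the list above:

```shell
#!/bin/sh
# check_provider_keys: print set/missing for each provider API key without
# ever echoing the key values themselves.
check_provider_keys() {
  for var in OPENAI_API_KEY GOOGLE_API_KEY ANTHROPIC_API_KEY; do
    eval "val=\${$var:-}"
    if [ -n "$val" ]; then
      echo "$var: set"
    else
      echo "$var: missing"
    fi
  done
}
```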
Local prompt-improver evals can now be regression-tested from the repo root with:

```shell
~/hairglasses-studio/dotfiles/scripts/hg-promptfoo.sh . eval -c promptfoo/promptfooconfig.yaml
```

That promptfoo suite is intentionally fast and deterministic. The provider-specific improvement paths stay covered by the Go tests in `internal/enhancer` and `cmd/prompt-improver`.
When you want those prompt-improver evals or CLI runs to emit traces to an OTLP collector or Langfuse, export either the standard OTLP env vars or the Langfuse-native trio before running `ralphglasses` or `go run ./cmd/prompt-improver`:

```shell
# Generic OTLP/HTTP or OTLP/gRPC
OTEL_EXPORTER_OTLP_ENDPOINT=http://127.0.0.1:4318
OTEL_EXPORTER_OTLP_HEADERS=authorization=Bearer demo-token

# Or Langfuse-native OTLP/HTTP derivation
LANGFUSE_HOST=https://cloud.langfuse.com
LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...
```

The public verification path is intentionally layered so local usage, docs, and CI exercise the same front door:
```shell
# Full repo gate
make ci

# Public smoke path: help/startup, doctor, discovery, scaffold, validate, repo health, automation
bash scripts/dev/public_smoke.sh
```

The public smoke path is also wired into `public-smoke.yml`.
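The Langfuse-native trio above can be turned into standard OTLP settings roughly as follows. This sketch assumes Langfuse's documented OTLP/HTTP ingestion path (`/api/public/otel`, Basic auth over `publicKey:secretKey`); verify the exact path and header against current Langfuse docs before relying on it:

```shell
#!/bin/sh
# Derive generic OTLP env vars from the Langfuse-native trio.
# Assumption: Langfuse accepts OTLP/HTTP at $LANGFUSE_HOST/api/public/otel
# with HTTP Basic auth of base64("publicKey:secretKey").
LANGFUSE_HOST="${LANGFUSE_HOST:-https://cloud.langfuse.com}"
LANGFUSE_PUBLIC_KEY="${LANGFUSE_PUBLIC_KEY:-pk-lf-example}"
LANGFUSE_SECRET_KEY="${LANGFUSE_SECRET_KEY:-sk-lf-example}"

auth="$(printf '%s:%s' "$LANGFUSE_PUBLIC_KEY" "$LANGFUSE_SECRET_KEY" | base64 | tr -d '\n')"
export OTEL_EXPORTER_OTLP_ENDPOINT="$LANGFUSE_HOST/api/public/otel"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Basic $auth"
echo "$OTEL_EXPORTER_OTLP_ENDPOINT"
```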
If you want to keep Codex busy across each 5-hour subscription window instead of running one-off marathons, enable the repo-local automation controller and feed its queue:

```shell
# Enable the default Codex saturation policy for the current repo.
bash scripts/dev/codex-window.sh enable --repo .

# Queue a roadmap-driven cycle for the next available window.
bash scripts/dev/codex-window.sh enqueue-cycle \
  --repo . \
  --objective "Burn down the highest-value open roadmap tranche" \
  --criteria "make ci passes,ROADMAP.md reconciled" \
  --max-tasks 3

# Inspect policy and queue state.
bash scripts/dev/codex-window.sh status --repo .
```

Codex automation now defaults to a 5-hour reset cadence (`0 */5 * * *`), a 98% target-utilization goal, and auto-resume pacing that parks sessions when the provider reports subscription exhaustion and brings them back after reset. Run `ralphglasses serve --automation` on your workspace to keep the queue draining in the background.
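The cadence arithmetic is worth spelling out: under cron semantics, `0 */5 * * *` fires at 00:00, 05:00, 10:00, 15:00, and 20:00, so windows are 5 hours long except the final 20:00 window, which wraps at midnight after 4 hours. A sketch of that math (illustrative only, not code from the repo):

```shell
#!/bin/sh
# minutes_until_reset MIN: given minutes since local midnight, return the
# minutes until the next 0 */5 * * * firing. Windows are 300 minutes,
# clamped to 1440 because the schedule restarts at midnight.
minutes_until_reset() {
  next=$(( ($1 / 300 + 1) * 300 ))
  [ "$next" -gt 1440 ] && next=1440
  echo $(( next - $1 ))
}

# Example: minutes_until_reset 1260   (21:00 -> 180 minutes until midnight)
```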
- `.agents/skills/` is canonical for provider-neutral workflows.
- `.agents/roles/*.json` is canonical for reusable fleet roles.
- `.codex/agents/*.toml`, `.claude/agents/*.md`, `.gemini/agents/*.md`, and `.clinerules` are native projections of that shared role catalog.
- `.mcp.json` is the shared MCP command source of truth; `.cline/mcp.json` mirrors that entrypoint for Cline.
- `.agents/workflows/` and `.agents/rules/` are the repo-native Antigravity/Gemini command surfaces generated from the live Ralph workflow catalog and repo guidance.
- `.gemini/commands/ralph/*.toml` is the generated local command mirror for Gemini and Antigravity.
- `.gemini/extensions/ralphglasses-workspace/` is a thin generated extension bundle that wraps the repo-local MCP server, commands, and `AGENTS.md`.
- Gemini parity is native-first: local `.gemini/agents/*.md` subagents, remote A2A agents, skills, and extensions all participate in the shared fleet model.
- `AGENTS.md` is the canonical repo instruction file.
- `.agents/roles/*.json` is the canonical reusable role catalog.
- `.agents/skills/` is the canonical workflow catalog.
- `.mcp.json` is the canonical MCP server command manifest.
- `.codex/agents/*.toml`, `.claude/agents/*.md`, `.gemini/agents/*.md`, and `.clinerules` are generated compatibility projections.
- `ROADMAP.md` is the canonical roadmap; roadmap-derived exports and checkpoints are generated views, not the source of truth.
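One way to sanity-check a repo against this canonical/generated split. This is a hypothetical helper, not part of the repo: it only verifies that the canonical sources exist, and deliberately ignores the generated projections:

```shell
#!/bin/sh
# check_canonical REPO: verify the canonical sources listed above exist.
# Generated projections (.codex/, .claude/, .gemini/, .clinerules) are
# intentionally not checked, since they are derived views.
check_canonical() {
  repo="$1"
  for f in AGENTS.md ROADMAP.md .mcp.json; do
    [ -e "$repo/$f" ] || { echo "missing canonical file: $f"; return 1; }
  done
  for d in .agents/roles .agents/skills; do
    [ -d "$repo/$d" ] || { echo "missing canonical dir: $d"; return 1; }
  done
  echo "canonical sources present"
}
```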
Cross-platform Unix TUI that manages multi-session, multi-provider LLM loops from any terminal.
Arch-family thin client image (Manjaro today) that boots into Sway or Hyprland plus the ralphglasses TUI. Supports 7-monitor, dual-NVIDIA-GPU setups; Ubuntu 24.04 + i3 is kept as a legacy rollback path.
See ROADMAP.md for the full plan.
258 MCP tools for programmatic control across all providers:
| Tool | Description |
|---|---|
| `ralphglasses_scan` | Scan for ralph-enabled repos |
| `ralphglasses_list` | List all repos with status |
| `ralphglasses_status` | Detailed status for a repo |
| `ralphglasses_start` | Start a ralph loop |
| `ralphglasses_stop` | Stop a ralph loop |
| `ralphglasses_stop_all` | Stop all managed loops |
| `ralphglasses_pause` | Pause/resume a loop |
| `ralphglasses_logs` | Get recent log lines |
| `ralphglasses_config` | Get/set `.ralphrc` values |
| Tool | Description |
|---|---|
| `ralphglasses_session_launch` | Launch a session (provider: codex/claude/gemini/cline/antigravity) |
| `ralphglasses_session_list` | List sessions (filter by provider, repo, status) |
| `ralphglasses_session_status` | Detailed session info (provider, cost, turns, model) |
| `ralphglasses_session_resume` | Resume a previous session (codex/claude/gemini/cline, if the installed CLI supports resume) |
| `ralphglasses_session_stop` | Stop a running session |
| `ralphglasses_session_budget` | Get/update budget for a session |
| `ralphglasses_loop_start` | Create a Codex gpt-5.4 planner/worker/verifier loop |
| `ralphglasses_loop_status` | Inspect persisted loop status and iteration history |
| `ralphglasses_loop_step` | Run one planner/worker/verifier iteration in a git worktree |
| `ralphglasses_loop_stop` | Stop a loop and block future iterations |
| Tool | Description |
|---|---|
| `ralphglasses_team_create` | Create team with provider for lead session |
| `ralphglasses_team_status` | Get team status and progress |
| `ralphglasses_team_delegate` | Add a task to an existing team |
| `ralphglasses_agent_define` | Create/update agent definitions |
| `ralphglasses_agent_list` | List available agent definitions |
| Tool | Description |
|---|---|
| `ralphglasses_roadmap_parse` | Parse ROADMAP.md into structured JSON |
| `ralphglasses_roadmap_analyze` | Compare roadmap vs codebase |
| `ralphglasses_roadmap_research` | Search GitHub for relevant repos/tools |
| `ralphglasses_roadmap_expand` | Generate proposed roadmap expansions |
| `ralphglasses_roadmap_export` | Export tasks as rdcycle/fix_plan specs or docs-ready tranche checkpoints |
| Tool | Description |
|---|---|
| `ralphglasses_repo_scaffold` | Create/init ralph config files for a repo |
| `ralphglasses_repo_optimize` | Analyze and optimize ralph config |
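For clients without built-in MCP support, the stdio server speaks plain JSON-RPC. A sketch of the message shapes, following the MCP specification's `initialize`/`tools/call` flow; the entrypoint script is the `run-mcp.sh` one wired in via `.mcp.json`:

```shell
#!/bin/sh
# JSON-RPC message shapes for the stdio MCP server. A real client performs
# the full initialize handshake and reads responses; to try it, pipe these
# lines into the entrypoint, e.g.:
#   printf '%s\n' "$init" "$call" | ./scripts/dev/run-mcp.sh --scan-path ~/hairglasses-studio
init='{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"sketch","version":"0.0.0"}}}'
call='{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":"ralphglasses_scan","arguments":{}}}'

printf '%s\n' "$init" "$call"
```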
```
main.go → cmd/root.go (Cobra CLI)
├── internal/discovery/ Scan for .ralph/ repos
├── internal/model/ Status, progress, config parsers
├── internal/process/ Process management, file watcher, log tailing
├── internal/session/ Multi-provider LLM session management
│   ├── providers.go Per-provider cmd builders + event normalizers
│   ├── runner.go Session lifecycle (launch, stream, terminate)
│   ├── manager.go Session/team registry
│   ├── budget.go Per-provider cost tracking + enforcement
│   └── types.go Provider enum, Session, LaunchOptions, TeamConfig
├── internal/mcpserver/ MCP tool handlers (258 tools: 253 grouped + 5 management, stdio)
├── internal/roadmap/ Roadmap parsing, analysis, research, export
├── internal/repofiles/ Ralph config scaffolding and optimization
├── internal/tui/ BubbleTea app, keymap, commands, filter
│   ├── styles/ Lip Gloss theme (k9s-inspired)
│   ├── components/ Table, breadcrumb, status bar, notifications
│   └── views/ Overview, repo detail, log stream, config editor, help
├── distro/ Thin client build system
│   ├── hardware/ Hardware manifests (PCI IDs, modules)
│   ├── scripts/ Build and detection scripts
│   ├── systemd/ Systemd service units
│   └── pxe/ PXE network boot docs
├── docs/ Research & reference docs
└── scripts/ Shell helpers (marathon.sh)
```
- Claude Code: Overview | CLI Reference | SDK
- Anthropic API: API Reference | Tool Use
- Gemini: API Overview | Models | Gemini CLI
- OpenAI: API Reference | Codex CLI | Models
- MCP (Model Context Protocol): Specification | Go SDK (mcp-go)
- Charmbracelet: Bubble Tea | Lip Gloss | Bubbles
- Autobuild Planning: docs/autobuild-patch-queue.json / docs/autobuild-cycle-improvement-notes.md / docs/autobuild-execution-ledger.json
- ROADMAP.md — Full development roadmap
- docs/CODEX-REFERENCE.md — Codex-first runtime notes, pinned docs, Claude cache guardrails
- docs/CODEX-PARITY-STATUS.md — Codex parity closeout state and future-session rules
- docs/PROVIDER-SETUP.md — provider setup plus Antigravity launch-only constraints
- docs/PROVIDER-PARITY-OBJECTIVES.md — provider capability matrix and Antigravity parity boundaries
- docs/CLI-PARITY.md — CLI-to-MCP and skill-backed parity matrix
- docs/RESEARCH.md — Agent OS & sandboxing research
- docs/MULTI-SESSION.md — Multi-session tool comparison
- CLAUDE.md — Claude Code project instructions
- GEMINI.md — Gemini CLI project instructions
- AGENTS.md — Codex CLI project instructions
- CONTRIBUTING.md — Multi-provider contribution guide
- docs/issue-ledger.json — Current repo issue ledger