- ✅ You want visual monitoring of AI agents and model invocations
- ✅ You need atomic control over agent capabilities and permissions
- ✅ You want generative policies that adapt to new threats automatically
- ✅ You need real-time alerts when sensitive assets are accessed
- ✅ You want budget control over AI token usage and costs
- ✅ You need security detection for sensitive data, injections, and dangerous commands
- ✅ You want a unified dashboard to manage all your AI security policies
Users can configure their own "vault" and lock in Agents, Skills, credentials, and files they care about.
When someone touches these assets, the "Security Lobster" notifies you via IM, e.g. a daily digest of who touched what in your vault.
Technical Implementation:
- Event collection based on API gateway and file-side monitoring (invocation records, file access, change tracking)
- Supports periodic change notifications and real-time alerts
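The event records behind these notifications can be sketched as follows. This is a minimal illustration; the class and field names are hypothetical, not ClawVault's actual schema:

```python
import time
from dataclasses import dataclass, field

@dataclass
class VaultAccessEvent:
    """One record in the vault's audit trail (illustrative schema)."""
    actor: str        # who touched the asset (agent name or user id)
    asset: str        # which vault asset was accessed
    action: str       # e.g. "read", "invoke", "modify"
    timestamp: float = field(default_factory=time.time)

def daily_digest(events: list[VaultAccessEvent]) -> list[str]:
    """Summarize 'who touched what' for a periodic IM notification."""
    return [f"{e.actor} performed '{e.action}' on {e.asset}" for e in events]

events = [
    VaultAccessEvent("support-agent", "contracts/acme.pdf", "read"),
    VaultAccessEvent("dev-agent", "AWS_SECRET_KEY", "read"),
]
print(daily_digest(events))
```

Real-time alerts would push each event as it arrives; the periodic digest batches them, as in the sketch above.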
Fine-grained control at the Agent level, using composable "atomic capabilities" as the smallest unit:
- Agent interaction and invocation policies
- Model routing, whitelists, and quota control
- Security detection (sensitive info recognition, credential detection, prompt injection protection, etc.)
- File access permission constraints
Users can combine these atomic capabilities like "building blocks" to create reusable policy configurations.
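As a rough sketch of that building-block idea (function and key names here are invented for illustration, not ClawVault's API), each atomic capability can be modeled as a small config fragment that gets merged into a named, reusable policy:

```python
# Hypothetical atomic capabilities, each returning one config fragment.
def model_whitelist(models):
    return {"model_whitelist": list(models)}

def token_quota(per_call):
    return {"max_tokens_per_call": per_call}

def sensitive_data_scan(enabled=True):
    return {"sensitive_data_scan": enabled}

def compose_policy(name, *capabilities):
    """Merge atomic capability fragments into one named policy."""
    policy = {"name": name}
    for cap in capabilities:
        policy.update(cap)
    return policy

policy = compose_policy(
    "customer-service-agent",
    model_whitelist(["gpt-4o-mini"]),
    token_quota(2000),
    sensitive_data_scan(),
)
print(policy)
```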
Each "storage chamber" in the vault ships with built-in baseline security scenarios, and users can add further detection scenarios and Skills via natural language by combining the atomic capabilities above.
Example:
Tell the system via the chat interface:

> For the customer service Agent, if a user uploads a PDF containing "contract",
> it must first go through sensitive-information desensitization,
> and only GPT-4o-mini is allowed, with a single-call limit of 2000 tokens.

The system automatically generates and enforces the corresponding policy rules.
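The generated policy for the example above might look roughly like this. The structure is hypothetical; field names are assumptions for illustration, not ClawVault's real rule schema:

```python
# Hypothetical rule generated from the natural-language instruction above.
generated_rule = {
    "agent": "customer-service",
    "trigger": {"upload_type": "pdf", "content_contains": "contract"},
    "actions": ["desensitize"],          # sanitize before forwarding
    "model_whitelist": ["gpt-4o-mini"],  # only this model is allowed
    "max_tokens_per_call": 2000,
}

def request_allowed(rule, model, tokens):
    """Check a model call against the generated rule."""
    return model in rule["model_whitelist"] and tokens <= rule["max_tokens_per_call"]

print(request_allowed(generated_rule, "gpt-4o-mini", 1500))  # True
print(request_allowed(generated_rule, "gpt-4o", 1500))       # False
```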
- 🔍 Sensitive Data Detection — API keys, passwords, PII, credit cards, and 15+ pattern types
- 🛡️ Prompt Injection Defense — Block role hijacking, instruction override, data exfiltration
- ⚠️ Dangerous Command Guard — Intercept `rm -rf`, `curl | bash`, privilege escalation
- 🔄 Auto-Sanitization — Replace secrets with placeholders, restore on response
- 💰 Token Budget Control — Daily/monthly limits with cost alerts
- 📊 Real-time Dashboard — Web UI with per-agent config, detection details, quick tests
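A minimal sketch of how regex-based detection and auto-sanitization can work; the patterns below are simplified stand-ins for the 15+ types the real engine covers:

```python
import re

# Illustrative detectors only; real patterns are more precise.
PATTERNS = {
    "api_key": re.compile(r"sk-[A-Za-z0-9\-]{10,}"),
    "password": re.compile(r"password\s*=\s*\S+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan(text):
    """Return the pattern types found in the text."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

def sanitize(text):
    """Replace each match with a placeholder (auto-sanitization step)."""
    for name, rx in PATTERNS.items():
        text = rx.sub(f"[REDACTED:{name}]", text)
    return text

sample = "password=MySecret key=sk-proj-abc123def456"
print(scan(sample))       # ['api_key', 'password']
print(sanitize(sample))
```

On the response path, the gateway would map the placeholders back to the original values before returning the reply to the caller.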
The vault includes a transparent proxy gateway module that intercepts traffic between your AI tools and external APIs (OpenAI, Anthropic, etc.).
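The core interception decision can be illustrated in a few lines: only requests bound for the configured AI API hosts are routed through the security pipeline, while all other traffic passes untouched. The host set mirrors the `intercept_hosts` config option; the function name is illustrative:

```python
from urllib.parse import urlparse

# Hosts whose traffic is routed through the security pipeline
# (mirrors the intercept_hosts setting in config.yaml).
INTERCEPT_HOSTS = {"api.openai.com", "api.anthropic.com"}

def should_intercept(url: str) -> bool:
    """Decide whether a request must pass through the vault's checks."""
    return urlparse(url).hostname in INTERCEPT_HOSTS

print(should_intercept("https://api.openai.com/v1/chat/completions"))  # True
print(should_intercept("https://example.com/health"))                  # False
```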
# Install from ClawHub
openclaw skills install tophant-clawvault
# Or install via clawhub CLI
clawhub install tophant-clawvault

ClawHub: https://clawhub.ai/Martin2877/tophant-clawvault
The skill provides AI-guided installation and management:
- `/clawvault install --mode quick` — Quick setup
- `/clawvault health` — Check status
- `/clawvault generate-rule "Block AWS credentials"` — Create security rules
- `/clawvault test --category all` — Run detection tests
See `skills/tophant-clawvault/` for the skill documentation.
# Install
pip install -e .
# Start (proxy + dashboard)
clawvault start
# Scan text
clawvault scan "password=MySecret key=sk-proj-abc123"
# Interactive demo
clawvault demo

# One command: pack, upload, install
./scripts/deploy.sh <server-ip> root
# On server: setup integration + start
./scripts/setup.sh
./scripts/start.sh

| Script | Usage |
|---|---|
| `scripts/deploy.sh <ip> [user]` | Deploy to cloud server |
| `scripts/start.sh` | Start ClawVault (add `--with-openclaw` to also start OpenClaw) |
| `scripts/stop.sh` | Stop all services |
| `scripts/test.sh` | Run CLI + API tests |
| `scripts/setup.sh` | Setup OpenClaw proxy integration |
| `scripts/uninstall.sh` | Uninstall and restore original state |
OpenClaw
│
▼
┌─────────────────────────────────┐
│ ClawVault (Security Vault) │
├─────────────────────────────────┤
│ Gateway Module │
│ • Transparent Proxy :8765 │
│ • Traffic Interception │
├─────────────────────────────────┤
│ Detection Engine │
│ • Sensitive data │
│ • Injection patterns │
│ • Dangerous commands │
├─────────────────────────────────┤
│ Guard / Sanitizer │
│ • Allow / Block / Sanitize │
├─────────────────────────────────┤
│ Audit + Monitor │
│ • SQLite storage │
│ • Token budget tracking │
├─────────────────────────────────┤
│ Dashboard │
│ • Web UI :8766 │
│ • Agent config & tests │
└─────────────────────────────────┘
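The flow implied by the diagram, detection findings feeding a guard verdict, can be sketched like this. The mode semantics here are assumptions based on the strict/interactive/permissive options, not ClawVault's documented behavior:

```python
# Assumed guard semantics: strict blocks on any finding, permissive
# sanitizes and forwards, interactive asks the user to decide.
def guard_decision(findings: list[str], mode: str) -> str:
    if not findings:
        return "allow"
    if mode == "strict":
        return "block"
    if mode == "permissive":
        return "sanitize"
    return "ask_user"  # interactive mode

print(guard_decision([], "strict"))               # allow
print(guard_decision(["api_key"], "strict"))      # block
print(guard_decision(["api_key"], "permissive"))  # sanitize
```

Every verdict, whatever the mode, would also be written to the audit store so the dashboard can surface it later.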
# ~/.ClawVault/config.yaml
proxy:
  port: 8765
  intercept_hosts: ["api.openai.com", "api.anthropic.com"]
guard:
  mode: "interactive"  # interactive | strict | permissive
monitor:
  daily_token_budget: 50000

| Capability Module | Status | Notes |
|---|---|---|
| API Gateway Monitoring & Interception | ✅ Implemented | V1 core capability |
| File-side Monitoring | 🚧 In Progress | Gradual integration |
| Agent-level Atomic Control | 🚧 In Progress | Gateway-side available, expanding to other scenarios |
| Generative Policy Orchestration | 🚧 In Progress | Gradual integration |
| Document | Description |
|---|---|
| Development Setup | Local dev environment |
| Production Deployment | Deploy to server |
| OpenClaw Integration | Connect with OpenClaw |
| Architecture | System design & modules |
| Guard Modes | strict / interactive / permissive |
| Scenarios | Use cases & roadmap |
See doc/ for the full documentation index.
git clone https://github.com/tophant-ai/ClawVault.git
cd ClawVault
python3 -m venv venv && source venv/bin/activate
pip install -e ".[dev]"
pytest

MIT © 2026 Tophant
- GitHub Issues — Bug reports and feature requests
- Security Issues — Security vulnerability reports
🦞 Built for people who want to secure AI, not babysit agents.


