Mini-A is a minimalist autonomous agent that uses LLMs, shell commands and/or MCP servers to achieve user-defined goals. Simple, flexible, and easy to use as a library, CLI tool, or embedded interface.
⚡ New Performance Optimizations! Mini-A now includes automatic optimizations that reduce token usage by 40-60% and costs by 50-70% with zero configuration. Learn more →
```mermaid
flowchart LR
    User((You)) -->|Goal & Parameters| MiniA[Mini-A Orchestrator]
    MiniA -->|Reasoning & Planning| LLM["LLM Models (Main & Low-Cost)"]
    MiniA -->|Tool Invocations| MCP["MCP Servers (Time, Finance, etc.)"]
    MiniA -->|Shell Tasks| Shell["Optional Shell"]
    MCP -->|Structured Data| MiniA
    Shell -->|Command Output| MiniA
    LLM -->|Thoughts & Drafts| MiniA
    MiniA -->|Final Response| User
    classDef node fill:#2563eb,stroke:#1e3a8a,stroke-width:2px,color:#fff
    classDef peripheral fill:#bfdbfe,stroke:#1d4ed8,color:#1e3a8a
    class User node
    class MiniA node
    class LLM,MCP,Shell peripheral
```
Two steps to use:
- Set the `OAF_MODEL` environment variable to the model you want to use:

  ```
  export OAF_MODEL="(type: openai, model: gpt-5-mini, key: '...', timeout: 900000, temperature: 1)"
  ```

  Use the built-in model manager when you prefer to store encrypted definitions instead of exporting raw environment variables:

  ```
  mini-a modelman=true
  ```

  The manager lets you create, import, rename, export, and delete reusable definitions that can then be exported as `OAF_MODEL`/`OAF_LC_MODEL` values or copied as raw SLON/JSON for sharing.

- Run the console:

  ```
  opack exec mini-a
  ```

  Type your goal at the prompt, or pass it inline:

  ```
  opack exec mini-a goal="your goal"
  ```

  If you enabled the optional alias displayed after installation, simply run `mini-a ...`.

  Inside the console you can inspect active parameters with slash commands; `/show` lists them all and `/show use` filters to parameters beginning with `use`. Conversation cleanup commands are also available: `/compact [n]` condenses older user/assistant turns into a single summary message while keeping the most recent `n` exchanges, and `/summarize [n]` generates a full narrative summary that replaces the earlier history while preserving the latest messages, so the session can continue with a condensed context window. When you need to revisit prior output, `/last [md]` reprints the previous final answer (add `md` to emit the raw Markdown), and `/save <path>` writes that answer directly to disk.

  Tab-complete tips: Slash commands that accept file paths (such as `/save`) support inline filesystem completion, so you can press Tab to expand directories and filenames instead of typing the whole path.

  Tip: Include file contents in your goals using `@path/to/file` syntax (e.g., `Follow these instructions @docs/guide.md`).
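An illustrative console session tying these commands together (only the inputs are shown; the trailing `#` annotations are explanatory, not typed):

```
opack exec mini-a
/show use                                  # list parameters beginning with "use"
Follow these instructions @docs/guide.md   # a goal with inlined file contents
/compact 3                                 # condense older turns, keep last 3 exchanges
/last md                                   # reprint the previous final answer as raw Markdown
/save notes.md                             # write that answer to notes.md
```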
Shell access is disabled by default for safety; add `useshell=true` when you explicitly want the agent to run commands.
For the browser UI, start `./mini-a-web.sh onport=8888` after exporting the model settings, then open http://localhost:8888.
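As a runnable snippet (assuming `OAF_MODEL` is already exported as in the quick-start step above):

```
./mini-a-web.sh onport=8888
# then browse to http://localhost:8888
```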
Mini-A can run in Docker containers for isolated execution and portability:
CLI console:

```
docker run --rm -ti \
  -e OPACKS=mini-a -e OPACK_EXEC=mini-a \
  -e OAF_MODEL="(type: openai, model: gpt-5-mini, key: '...', timeout: 900000)" \
  openaf/oaf:edge
```

Web interface:

```
docker run -d --rm \
  -e OPACKS=mini-a -e OPACK_EXEC=mini-a \
  -e OAF_MODEL="(type: openai, model: gpt-5-mini, key: '...', timeout: 900000)" \
  -p 12345:12345 \
  openaf/oaf:edge onport=12345
```

Goal execution:

```
docker run --rm \
  -e OPACKS=mini-a \
  -e OAF_MODEL="(type: openai, model: gpt-5-mini, key: '...', timeout: 900000)" \
  openaf/oaf:edge \
  ojob mini-a/mini-a.yaml goal="your goal here" useshell=true
```

See USAGE.md for comprehensive Docker examples including multiple MCPs, AWS Bedrock, planning workflows, and more.
List files:

```
mini-a goal="list all JavaScript files in this directory" useshell=true
```

Using MCP servers:

```
mini-a goal="what time is it in Sydney?" mcp="(cmd: 'ojob mcps/mcp-time.yaml', timeout: 5000)"
```

Aggregate MCP tools via proxy (single tool exposed):

```
mini-a goal="compare release dates across APIs" \
  usetools=true mcpproxy=true \
  mcp="[(cmd: 'ojob mcps/mcp-time.yaml'), (cmd: 'ojob mcps/mcp-fin.yaml')]" \
  useutils=true
```

This keeps the LLM context lean by exposing a single proxy-dispatch tool even when multiple MCP servers and the Mini Utils Tool are active. See docs/MCPPROXY-FEATURE.md for a deep dive.

Chatbot mode:

```
mini-a goal="help me plan a vacation in Lisbon" chatbotmode=true
```

- Install OpenAF from openaf.io
- Install the oPack:

  ```
  opack install mini-a
  ```

- Set your model configuration (see Quick Start above)
- Start using Mini-A via `opack exec mini-a` (or the `mini-a` alias if you added it)!
- Multi-Model Support - Works with OpenAI, Google Gemini, GitHub Models, AWS Bedrock, Ollama, and more
- Dual-Model Cost Optimization - Use a low-cost model for routine steps with smart escalation (see USAGE.md)
- Built-in Performance Optimizations - Automatic context management, dynamic escalation, and parallel action support deliver 40-60% token reduction and 50-70% cost savings (see docs/OPTIMIZATIONS.md)
- MCP Integration - Seamless integration with Model Context Protocol servers (STDIO & HTTP)
  - Dynamic Tool Selection - Intelligent filtering of MCP tools using stemming, synonyms, n-grams, and fuzzy matching (`mcpdynamic=true`)
  - Tool Caching - Smart caching for deterministic and read-only tools to avoid redundant operations
  - Circuit Breakers - Automatic connection health management with cooldown periods
  - Lazy Initialization - Deferred MCP connection establishment for faster startup (`mcplazy=true`)
  - Proxy Aggregation - Collapse all MCP connections (including Mini Utils Tool) into a single `proxy-dispatch` tool to minimize context usage (`mcpproxy=true`)
- Built-in MCP Servers - Database, file system, network, time/timezone, email, S3, RSS, Yahoo Finance, SSH, and more
- MCP Self-Hosting - Expose Mini-A itself as an MCP server via `mcps/mcp-mini-a.yaml` (remote callers can run goals with limited formatting/planning overrides while privileged flags stay server-side)
- Optional Shell Access - Execute shell commands with safety controls and sandboxing
- Web UI - Lightweight embedded chat interface for interactive use
- Planning Mode - Generate and execute structured task plans for complex goals
  - Plan Validation - LLM-based critique validates plans before execution
  - Dynamic Replanning - Automatic plan adjustments when obstacles occur
  - Phase Verification - Auto-generated verification tasks ensure phase completion
- Mode Presets - Quick configuration bundles (shell, chatbot, web, etc.) - see USAGE.md
- Conversation Persistence - Save and resume conversations across sessions
- Rate Limiting - Built-in rate limiting for API usage control
- Metrics & Observability - Comprehensive runtime metrics for monitoring and cost tracking
- Enhanced Visual Output - UTF-8 box-drawing characters, ANSI color codes, and emoji for rich terminal displays (`useascii=true`)
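Most of these features are flag-driven. As an illustrative (not prescriptive) combination of the flags listed above on a single invocation:

```
mini-a goal="compare market open times across exchanges" \
  usetools=true mcpdynamic=true mcplazy=true useascii=true \
  mcp="[(cmd: 'ojob mcps/mcp-time.yaml'), (cmd: 'ojob mcps/mcp-fin.yaml')]"
```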
- What's New - Latest performance improvements and migration guide
- Quick Reference Cheatsheet - Fast lookup for all parameters and common patterns
- Performance Optimizations - Built-in optimizations for token reduction and cost savings
- MCP Proxy Guide - How to consolidate multiple MCP connections behind one `proxy-dispatch` tool
- Usage Guide - Comprehensive guide covering all features
- MCP Documentation - Built-in MCP servers catalog
- Creating MCPs - Build custom MCP integrations
- External MCPs - Community MCP servers
- Contributing Guide - Join the project
- Code of Conduct - Community standards
Mini-A ships with complementary components:
- `mini-a.yaml` - Core oJob definition that implements the agent workflow
- `mini-a-con.js` - Interactive console available through `opack exec mini-a` (or the `mini-a` alias)
- `mini-a.sh` - Shell wrapper script for running directly from a cloned repository
- `mini-a.js` - Reusable library for embedding in other OpenAF jobs
- `mini-a-web.sh` / `mini-a-web.yaml` - Lightweight HTTP server for browser UI
- `mini-a-modes.yaml` - Built-in configuration presets for common use cases (can be extended with `~/.openaf-mini-a_modes.yaml`)
- `public/` - Browser interface assets
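From a cloned repository, the same workflow can be launched without the oPack wrapper. A sketch, assuming the core oJob and the wrapper script accept the same parameters shown in the Docker goal-execution example above:

```
# run the core oJob directly from the repository root
ojob mini-a.yaml goal="your goal" useshell=true

# or use the shell wrapper
./mini-a.sh goal="your goal" useshell=true
```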
| Option | Description | Default |
|---|---|---|
| `goal` | Objective the agent should achieve | Required |
| `youare` | Override the opening persona sentence in the system prompt (inline text or `@file` path) to craft specialized agents | "You are a goal-oriented agent running in background." (Mini-A still appends the step-by-step/no-feedback directives automatically) |
| `chatyouare` | Override the chatbot persona sentence when `chatbotmode=true` (inline text or `@file` path) | "You are a helpful conversational AI assistant." |
| `useshell` | Allow shell command execution | false |
| `readwrite` | Allow file system modifications | false |
| `mcp` | MCP server configuration (single or array) | - |
| `usetools` | Register MCP tools with the model | false |
| `mcpproxy` | Aggregate all MCP connections (and Mini Utils Tool) under a single `proxy-dispatch` tool to save context | false |
| `chatbotmode` | Conversational assistant mode | false |
| `useplanning` | Enable task planning workflow with validation and dynamic replanning | false |
| `useascii` | Enable enhanced UTF-8/ANSI visual output with colors and emojis | false |
| `mode` | Apply preset from `mini-a-modes.yaml` or `~/.openaf-mini-a_modes.yaml` | - |
| `modelman` | Launch the interactive model definitions manager | false |
| `maxsteps` | Maximum steps before forcing final answer | 15 |
| `rpm` | Rate limit (requests per minute) | - |
| `shellprefix` | Override the prefix appended to each shell command in stored plans | - |
| `verbose` / `debug` | Enable detailed logging | false |
For the complete list and detailed explanations, see the Usage Guide.
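For instance, several of these options can be combined on one invocation (the values here are arbitrary illustrations):

```
mini-a goal="audit the scripts in this directory" \
  useshell=true maxsteps=30 rpm=20 verbose=true
```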
Examples for different providers:
OpenAI:

```
export OAF_MODEL="(type: openai, model: gpt-5-mini, key: ..., timeout: 900000, temperature: 1)"
```

Google Gemini:

```
export OAF_MODEL="(type: gemini, model: gemini-2.5-flash-lite, key: ..., timeout: 900000, temperature: 0)"
# Required for Gemini models:
export OAF_MINI_A_NOJSONPROMPT=true
```

GitHub Models:

```
export OAF_MODEL="(type: openai, url: 'https://models.github.ai/inference', model: openai/gpt-5-nano, key: $(gh auth token), timeout: 900000, temperature: 1, apiVersion: '')"
```

AWS Bedrock (requires OpenAF AWS oPack):

```
export OAF_MODEL="(type: bedrock, timeout: 900000, options: (model: 'amazon.nova-pro-v1:0', temperature: 0))"
```

Ollama (local):

```
export OAF_MODEL="(type: ollama, model: 'gemma3', url: 'http://ollama.local', timeout: 900000)"
```

Dual-model for cost optimization:

```
# High-capability model for complex reasoning
export OAF_MODEL="(type: openai, model: gpt-4, key: '...')"

# Low-cost model for routine operations
export OAF_LC_MODEL="(type: openai, model: gpt-3.5-turbo, key: '...')"
```

For more model configurations and recommendations, see USAGE.md.
Mini-A includes built-in security features:
- Command Filtering - Dangerous commands blocked by default
- Interactive Confirmation - Optional approval for each command (`checkall=true`)
- Read-Only Mode - File system protection enabled by default
- Shell Isolation - Shell access disabled by default
- Sandboxing Support - Use `shell=...` prefix for Docker, Podman, or OS sandboxes
Example with Docker sandbox:

```
docker run -d --rm --name mini-a-sandbox -v "$PWD":/work -w /work ubuntu:24.04 sleep infinity
mini-a goal="analyze files" useshell=true shell="docker exec mini-a-sandbox"
```

See USAGE.md for detailed security information and sandboxing strategies.
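Because `shell=...` is just a command prefix, a Podman sandbox should work the same way. An untested sketch mirroring the Docker example:

```
# assumes podman is installed; same container layout as the Docker example
podman run -d --rm --name mini-a-sandbox -v "$PWD":/work -w /work ubuntu:24.04 sleep infinity
mini-a goal="analyze files" useshell=true shell="podman exec mini-a-sandbox"
```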
We welcome contributions! Please see our Contributing Guide for details on:
- Code contribution process
- Development setup
- Pull request guidelines
- Community standards
Run the test suite from the repository root:
```
ojob tests/autoTestAll.yaml
```

The run generates an `autoTestAll.results.json` file with detailed results; inspect it locally and delete it before your final commit.
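To skim the results file, any JSON tool will do; for example (assuming `jq` is available; the file's exact schema is not documented here):

```
jq . autoTestAll.results.json
```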
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Email: [email protected]
Please read our Code of Conduct before participating.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
