Background
I've been integrating CatchMe with agent hosts (Claude Desktop, Cursor, and Hermes Agent), and the current skill-file approach works — but it's limited to prose answers via `catchme ask`. Agents can't query specific nodes, get structured data back, or chain multiple calls efficiently without burning extra LLM tokens re-parsing natural language.
The entire agent ecosystem is converging on Model Context Protocol (MCP) as the standard tool interface. CatchMe is architecturally a perfect fit — it's already a local server with structured tree data — but right now it's invisible to MCP-compatible hosts.
The Gap
Current agent integration:
```shell
catchme ask -- "What was I working on this morning?"
# Returns: prose string only
```
Agents receive unstructured text and must re-parse it. There's no way to:
- Get raw structured node data
- Query a specific date range and get JSON back
- Chain `list_days` → `get_session` → `search_activity` in one agent turn
- Expose CatchMe as a tool in Claude Desktop's MCP tool picker
Proposal: `catchme mcp` Command
Add a `catchme mcp` command that starts CatchMe as a local MCP server (stdio or HTTP/SSE transport), exposing a small set of tools:
| Tool | Description |
| --- | --- |
| `search_activity(query, date_range?)` | Natural language search, returns structured node matches |
| `list_days()` | Returns available days with top-level summaries |
| `get_session(session_id)` | Returns full session detail with app/location breakdown |
| `get_tree(date)` | Returns the full activity tree for a given day as JSON |
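To make "structured data back" concrete, here is one possible shape for a `get_session` result. The field names (`apps`, `locations`, etc.) are illustrative assumptions for discussion, not CatchMe's actual schema:

```python
# Hypothetical return shape for get_session — field names are
# illustrative, not CatchMe's real schema.
from typing import TypedDict


class SessionDetail(TypedDict):
    session_id: str
    start: str                # ISO-8601 timestamp
    end: str
    apps: dict[str, int]      # app name -> seconds of focus
    locations: list[str]


example: SessionDetail = {
    "session_id": "2024-05-01-am-3",
    "start": "2024-05-01T09:12:00",
    "end": "2024-05-01T10:05:00",
    "apps": {"VS Code": 2400, "Chrome": 780},
    "locations": ["home-office"],
}
```

An agent can reason over a payload like this directly (e.g. sum `apps` values for total focus time) instead of re-parsing prose.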
Example MCP config (`claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "catchme": {
      "command": "catchme",
      "args": ["mcp"]
    }
  }
}
```
Once registered, any MCP-compatible agent (Claude Desktop, Cursor, Hermes, etc.) sees CatchMe as a native tool — no skill files, no CLI parsing, no prose round-trips.
Why This Matters
- Zero new capture logic — purely an output/interface layer over existing tree data
- Multiplies CatchMe's reach — every user of an MCP-compatible host becomes a potential user, with no bespoke integration work
- Structured outputs — agents get JSON they can reason over, not text they must re-parse
- Composable — an agent can call `list_days()`, then `get_session()`, then `search_activity()` in a single turn with full context chaining
Implementation Notes
The MCP Python SDK (`mcp[cli]`) makes this relatively lightweight — the server just wraps the existing `retrieve()` generator and tree-loading functions already in `catchme/pipelines/`. A minimal stdio server could be ~100–150 lines.
Happy to discuss the approach or help prototype it if the direction makes sense.