Your AI tools forget. crossmem doesn't.
Cross-tool memory for AI coding agents. One pip install, zero cloud, zero accounts — your Claude Code, GitHub Copilot, and Gemini CLI sessions remember everything, across every project, automatically.
```bash
pip install crossmem   # 1. Install
crossmem setup         # 2. Done. All tools configured.
```

That's it. Every AI coding session now starts with cross-project context.
Every AI coding session starts cold. The model re-asks the same setup questions, re-derives the same patterns, re-reads the same files — because it has no memory of what you worked through last time.
We measured the impact using tokenxray across real Claude Code sessions before and after installing crossmem:
| Metric | Delta |
|---|---|
| Avg input tokens / session | −74% |
| Avg cost / session | −83% |
| Cache hit rate | +18% |
| Avg turns to reach result | −60% |
Measured across real sessions. Results vary by project size and task type.
crossmem fixes this by injecting remembered context before the AI starts thinking — not as a suggestion it can ignore, but as enforced context.
```
You: "How should I handle credentials in this new service?"

AI: [crossmem recalls patterns from 3 of your projects]
    Based on your backend-api, mobile-app, and infra-tools projects,
    you use a middleware layer for credential masking — keys in
    Secret Manager, never env vars, masked in logs via
    _mask_sensitive_headers(). Applying the same pattern here.
```
No copy-pasting. No "I already solved this." Your AI remembers — across every project.
| | crossmem | Mem0 | Letta | Zep |
|---|---|---|---|---|
| Install | pip install + SQLite | Cloud API key or self-hosted Qdrant | Server + Docker | Postgres + Go server |
| Cross-tool | Claude + Copilot + Gemini | Single app | Single app | Single app |
| Cross-project | All projects, one index | Per-app scoped | Per-agent scoped | Per-session scoped |
| Protocol | MCP-native | REST API | Custom framework | SDK |
| Infrastructure | None. Local SQLite. | Cloud or Qdrant + server | Letta server | Postgres + server |
| Enforcement | Hook injects context before generation | LLM decides to call API | LLM self-manages memory | LLM calls SDK |
| Tool | Auto-recall | How |
|---|---|---|
| Claude Code | SessionStart hook (startup/resume/compact) + UserPromptSubmit hook (every prompt) | crossmem install-hook |
| GitHub Copilot | Injects memories into copilot-instructions.md | crossmem install-hook --tool copilot |
| VS Code Agent Mode | SessionStart + UserPromptSubmit hooks (Preview) | crossmem install-hook --tool copilot-agent |
| Gemini CLI | Instruction in GEMINI.md | crossmem install-instructions |
| Claude Desktop | MCP server auto-called at session start | Add to claude_desktop_config.json |
```bash
cd ~/any-project
claude   # Claude Code: hook fires, memories injected automatically
code .   # Copilot: reads context pre-injected into copilot-instructions.md
gemini   # Gemini: calls mem_recall via instruction in GEMINI.md
```
- Auto-ingest — pulls latest memories from Claude, Copilot, and Gemini native files
- Auto-init — first time in a project? Indexes README.md, CLAUDE.md, etc.
- Tiered recall — returns most relevant context within a token budget: curated memories > tool memories > CLAUDE.md > CONTRIBUTING.md > README.md
- Mid-session recall (Claude Code + VS Code Agent Mode) — every prompt is searched against your memories. Relevant context is injected before the model responds — no manual `mem_recall` needed.
- Learn — AI saves new discoveries via `mem_save` during sessions. Knowledge compounds. Each memory is written as a durable `.md` file to `~/.crossmem/memories/<project>/` — the DB is rebuilt from these files on startup, so memories survive a clean install.
- Freshness tracking (v1.3.0) — every memory carries a `last_verified` timestamp. `mem_recall` and `mem_search` surface `[verified: YYYY-MM-DD]` or `[unverified]` next to each result so agents can judge trust level at a glance. Use `mem_verify(id)` to stamp a memory as confirmed without changing its content.
- Smart search (v1.1.0) — a two-layer noise filter separates signal tokens from noise before every FTS query. Layer 1 excludes 168 linguistically fixed closed-class words (prepositions, pronouns, auxiliaries, etc.) in O(1). Layer 2 applies corpus-adaptive IDF via FTS5 — tokens that appear in more than 40% of your documents are treated as project-specific noise. Zero additional dependencies.
- Scope model (v1.2.0) — memories are either project-scoped (visible only within their project) or global (surfaced everywhere). Memories saved identically across 2+ projects are automatically promoted to global via `auto_promote_patterns()`.
- WIP scope (v1.9.0) — `mem_save(..., scope="wip")` marks in-progress context that should surface prominently at the next session start, then auto-demotes to project scope so it doesn't clutter every recall. Use it to leave a note for your next session: "Started auth rewrite, blocked on token refresh logic — resuming here."
- Relevance scores (v1.9.0) — query-scoped `mem_recall(query=...)` annotates each result with `[rel: XX%]` so the agent can immediately see match quality. Full-session recall (no query) is unaffected.
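The two-layer filter above can be sketched in a few lines of Python. The names and the stopword subset here are illustrative, not crossmem's actual code: the real set has 168 entries, and layer 2 is delegated to SQLite FTS5 rather than computed by hand.

```python
# Layer 1: a fixed closed-class word set, checked in O(1).
# (Illustrative subset; the real list has 168 entries.)
CLOSED_CLASS = {
    "the", "a", "an", "of", "in", "on", "for", "to", "and", "or", "it",
    "is", "are", "was", "be", "this", "that", "with", "by", "how", "should",
}

def filter_query_tokens(tokens, doc_freq, n_docs, idf_cutoff=0.40):
    """Keep only signal tokens before handing the query to FTS."""
    signal = []
    for tok in tokens:
        t = tok.lower()
        if t in CLOSED_CLASS:          # layer 1: closed-class word, drop
            continue
        if doc_freq.get(t, 0) / n_docs > idf_cutoff:
            continue                   # layer 2: in >40% of docs = project noise
        signal.append(t)
    return signal
```

For example, in a corpus of 10 documents where "token" appears in 9 of them, the query "the JWT token rotation" reduces to `["jwt", "rotation"]`: the closed-class "the" and the corpus-saturated "token" are both filtered out.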
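Cross-project promotion works conceptually like the hedged sketch below. The real `auto_promote_patterns()` lives in crossmem and may normalize or match content differently; assume here that memories are dicts with `project`, `content`, and `scope` keys.

```python
from collections import defaultdict

def auto_promote_patterns(memories, min_projects=2):
    """Mark memories saved identically in >= min_projects projects as global."""
    projects_by_text = defaultdict(set)
    for m in memories:
        # Group by normalized content; collect the projects each text appears in.
        projects_by_text[m["content"].strip().lower()].add(m["project"])
    for m in memories:
        if len(projects_by_text[m["content"].strip().lower()]) >= min_projects:
            m["scope"] = "global"    # surfaced everywhere from now on
    return memories
```

A pattern you re-save in a second project stops being project trivia and becomes a habit worth surfacing everywhere; that is the whole promotion rule.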
Add to your tool's MCP config so AI assistants can search, recall, and save memories in real-time:
Claude Code (`~/.mcp.json`)

```json
{
  "mcpServers": {
    "crossmem": {
      "command": "crossmem-server"
    }
  }
}
```

GitHub Copilot (`.vscode/mcp.json`)

```json
{
  "servers": {
    "crossmem": {
      "command": "uvx",
      "args": ["--from", "crossmem", "crossmem-server"]
    }
  }
}
```

Gemini CLI (`~/.gemini/settings.json`)

```json
{
  "mcpServers": {
    "crossmem": {
      "command": "crossmem-server"
    }
  }
}
```

Claude Desktop (`~/Library/Application Support/Claude/claude_desktop_config.json`)

```json
{
  "mcpServers": {
    "crossmem": {
      "command": "crossmem-server"
    }
  }
}
```

If `crossmem-server` isn't on PATH, use `uvx --from crossmem crossmem-server` instead.
| Tool | Description |
|---|---|
| `mem_recall` | Load project context + cross-project patterns at session start |
| `mem_search` | Search across all memories (query, project filter, limit) |
| `mem_save` | Save a discovery during a session (writes a durable `.md` file + DB row). `scope="wip"` for carry-over context that auto-demotes after next recall. |
| `mem_update` | Update a memory in place (preserves ID, syncs backing file) |
| `mem_forget` | Delete a memory by ID (removes backing file from disk) |
| `mem_get` | Get full content of a memory by ID |
| `mem_verify` | Mark a memory as verified today (no content change) |
| `mem_promote` | Promote a project-scoped memory to global scope |
| `mem_demote` | Demote a global memory back to project scope |
| `mem_deduplicate` | Scan for near-duplicate memories and remove redundant entries |
| `mem_init` | Index project documentation files |
| `mem_ingest` | Refresh the index from native tool memory files |
CLI reference
```bash
# Recall (runs automatically via hook)
crossmem recall                        # auto-detects project from cwd
crossmem recall -p backend-api         # explicit project
crossmem recall --format copilot       # marker-wrapped for Copilot injection
crossmem recall --format vscode        # JSON for VS Code agent-mode hooks

# Search
crossmem search "JWT token rotation"
crossmem search "retry strategy" -p backend-api -n 5

# Save / Update / Delete
crossmem save "Always use middleware for credential masking" -p backend-api -s Patterns
crossmem update 42 "corrected content here"
crossmem forget 42

# Index project docs
crossmem init                          # current directory
crossmem init -p my-api --path ~/projects/api

# Hooks
crossmem install-hook                            # Claude Code (SessionStart + UserPromptSubmit)
crossmem install-hook --tool copilot             # Copilot (workspace instructions)
crossmem install-hook --tool copilot --global    # Copilot (all workspaces)
crossmem install-hook --tool copilot --if-stale  # refresh if >30 min old
crossmem install-hook --tool copilot-agent       # VS Code agent mode (.github/hooks/)
crossmem install-instructions                    # Gemini

# Internal (installed as hooks — not run manually)
crossmem prompt-search                 # mid-session recall via UserPromptSubmit

# Diagnose
crossmem doctor                        # health check: binary, DB, hooks, Gemini, FTS index

# Other
crossmem ingest                        # re-ingest tool memories
crossmem graph                         # visualize knowledge graph in browser
crossmem stats                         # database stats
crossmem setup                         # one-time: Claude hook + Copilot injection + Gemini instructions + ingest
```

| Tool | Memory files |
|---|---|
| Claude Code | ~/.claude/projects/*/memory/*.md |
| Gemini CLI | ~/.gemini/GEMINI.md |
| GitHub Copilot (macOS) | ~/Library/Application Support/Code/User/globalStorage/github.copilot-chat/memory-tool/memories/*.md |
| GitHub Copilot (Linux) | ~/.config/Code/User/globalStorage/github.copilot-chat/memory-tool/memories/*.md |
| GitHub Copilot (Windows) | %APPDATA%\Code\User\globalStorage\github.copilot-chat\memory-tool\memories\*.md |
| crossmem (via `mem_save`) | ~/.crossmem/memories/&lt;project&gt;/*.md |
Ingestion is pluggable — PRs welcome for new tools.
See CHANGELOG.md for release notes.
Found a bug? Want to add support for another AI tool? Open an issue or submit a PR.
If crossmem saves you from re-explaining your codebase to AI, consider giving it a star — it helps others find it.
MIT

