MemU is a 24/7 proactive memory framework that continuously learns, anticipates, and adapts. It transforms passive LLM backends into intelligent agents with always-on memory that surfaces insights, predicts needs, and evolves context without explicit queries.
If you find memU useful or interesting, a GitHub Star ⭐ would be greatly appreciated.
| Capability | Description |
|---|---|
| Continuous Learning | 24/7 memory extraction from every interaction: conversations, documents, actions |
| Proactive Retrieval | Anticipates information needs before being asked and surfaces relevant context automatically |
| Context Evolution | Memory structure adapts in real time based on usage patterns and emerging topics |
| Dual Intelligence | Fast embedding-based recall plus deep LLM reasoning for comprehensive understanding |
| Multimodal Awareness | Unified memory across text, images, audio, and video; remembers what it sees and hears |
Unlike traditional retrieval systems that wait for queries, MemU operates in continuous mode:
| Traditional RAG | MemU Proactive Memory |
|---|---|
| ❌ Waits for explicit queries | ✅ Monitors context continuously |
| ❌ Reactive information retrieval | ✅ Anticipates information needs |
| ❌ Static knowledge base | ✅ Self-evolving memory structure |
| ❌ One-time processing | ✅ Always-on learning pipeline |
```
┌───────────────────────────────────────────────────┐
│ 1. CONTINUOUS INGESTION                           │
│    └─ Every conversation, document, action        │
│       automatically processed 24/7                │
└───────────────────────────────────────────────────┘
                          ↓
┌───────────────────────────────────────────────────┐
│ 2. REAL-TIME EXTRACTION                           │
│    └─ Immediate memory item creation              │
│       No batch delays, instant availability       │
└───────────────────────────────────────────────────┘
                          ↓
┌───────────────────────────────────────────────────┐
│ 3. PROACTIVE STRUCTURING                          │
│    └─ Auto-categorization into evolving topics    │
│       Hierarchical organization adapts to usage   │
└───────────────────────────────────────────────────┘
                          ↓
┌───────────────────────────────────────────────────┐
│ 4. ANTICIPATORY RETRIEVAL                         │
│    └─ Surfaces relevant memory without prompting  │
│       Context-aware suggestions and insights      │
└───────────────────────────────────────────────────┘
```
Agent monitors conversation context and proactively surfaces relevant memories:

```
# User starts discussing a topic
User: "I'm thinking about that project..."

# MemU automatically retrieves without explicit query:
- Previous project discussions
- Related preferences and constraints
- Past decisions and their outcomes
- Relevant documents and resources

Agent: "Based on your previous work on the dashboard redesign,
I noticed you preferred Material UI components..."
```

Agent anticipates upcoming needs based on patterns:
```
# Morning routine detection
User logs in at 9 AM (usual time)

# MemU proactively surfaces:
- Daily standup talking points
- Overnight notifications summary
- Priority tasks based on past behavior
- Relevant context from yesterday's work

Agent: "Good morning! Here's what's relevant today..."
```

System self-organizes without manual intervention:
```
# As interactions accumulate:
→ Automatically creates new categories for emerging topics
→ Consolidates related memories across modalities
→ Identifies patterns and extracts higher-level insights
→ Prunes outdated information while preserving history

# Result: Always-optimized memory structure
```

MemU's three-layer system enables both reactive queries and proactive context loading:
| Layer | Reactive Use | Proactive Use |
|---|---|---|
| Resource | Direct access to original data | Background monitoring for new patterns |
| Item | Targeted fact retrieval | Real-time extraction from ongoing interactions |
| Category | Summary-level overview | Automatic context assembly for anticipation |
**Proactive Benefits:**

- **Auto-categorization**: New memories self-organize into topics
- **Pattern Detection**: System identifies recurring themes
- **Context Prediction**: Anticipates what information will be needed next
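
To make the layering concrete, here is a sketch of how all three layers show up in a single `retrieve()` call. The response fields follow the `retrieve()` example documented later in this README; the traversal itself is illustrative, not a library API:

```python
async def show_layers(service, user_id: str):
    """Illustrative walk over one retrieve() result, layer by layer."""
    result = await service.retrieve(
        queries=[{"role": "user", "content": {"text": "project preferences"}}],
        where={"user_id": user_id},
        method="rag",
    )
    for category in result["categories"]:  # Category layer: summary-level topics
        print("topic:", category)
    for item in result["items"]:           # Item layer: targeted memory facts
        print("fact:", item)
    for resource in result["resources"]:   # Resource layer: original sources
        print("source:", resource)
```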
Experience proactive memory instantly:

memu.so - Hosted service with 24/7 continuous learning

For enterprise deployment with custom proactive workflows, contact info@nevamind.ai.
| Setting | Value |
|---|---|
| Base URL | `https://api.memu.so` |
| Auth | `Authorization: Bearer YOUR_API_KEY` |
| Method | Endpoint | Description |
|---|---|---|
| POST | `/api/v3/memory/memorize` | Register continuous learning task |
| GET | `/api/v3/memory/memorize/status/{task_id}` | Check real-time processing status |
| POST | `/api/v3/memory/categories` | List auto-generated categories |
| POST | `/api/v3/memory/retrieve` | Query memory (supports proactive context loading) |
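
As a sketch of the request flow against these endpoints (the JSON payload mirrors the `memorize()` parameters documented below, and the `task_id`/`status` response fields are assumptions; consult SERVICE_API.md for the authoritative schema):

```python
import time

import requests

BASE_URL = "https://api.memu.so"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

# Register a continuous learning task. The payload fields are assumed to
# mirror the memorize() parameters shown later in this README.
resp = requests.post(
    f"{BASE_URL}/api/v3/memory/memorize",
    headers=HEADERS,
    json={
        "resource_url": "https://example.com/conversation.json",
        "modality": "conversation",
        "user": {"user_id": "123"},
    },
)
resp.raise_for_status()
task_id = resp.json()["task_id"]  # assumed response field

# Poll the real-time processing status until the task settles.
while True:
    status = requests.get(
        f"{BASE_URL}/api/v3/memory/memorize/status/{task_id}",
        headers=HEADERS,
    ).json()
    if status.get("status") in ("done", "failed"):  # assumed status values
        break
    time.sleep(1)
```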
```bash
pip install -e .
```

Requirements: Python 3.13+ and an OpenAI API key.
Test Continuous Learning (in-memory):

```bash
export OPENAI_API_KEY=your_api_key
cd tests
python test_inmemory.py
```

Test with Persistent Storage (PostgreSQL):
```bash
# Start PostgreSQL with pgvector
docker run -d \
  --name memu-postgres \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=postgres \
  -e POSTGRES_DB=memu \
  -p 5432:5432 \
  pgvector/pgvector:pg16

# Run continuous learning test
export OPENAI_API_KEY=your_api_key
cd tests
python test_postgres.py
```

Both examples demonstrate proactive memory workflows:
- **Continuous Ingestion**: Process multiple files sequentially
- **Auto-Extraction**: Immediate memory creation
- **Proactive Retrieval**: Context-aware memory surfacing
See tests/test_inmemory.py and tests/test_postgres.py for implementation details.
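
For a feel of the full loop outside the test scripts, here is a minimal end-to-end sketch. The constructor shape and the in-memory `metadata_store` come from the configuration examples below; the default-profile values and the sample file path are assumptions:

```python
import asyncio
import os

from memu import MemUService

service = MemUService(
    llm_profiles={
        "default": {
            "base_url": "https://api.openai.com/v1",  # assumed OpenAI-compatible default
            "api_key": os.environ["OPENAI_API_KEY"],
            "chat_model": "gpt-4o-mini",              # assumed model choice
        },
    },
    database_config={
        "metadata_store": {"provider": "inmemory"},   # no PostgreSQL required
    },
)

async def main():
    # Ingest one conversation, then immediately query the extracted memory.
    await service.memorize(
        resource_url="tests/data/conversation.json",  # hypothetical sample file
        modality="conversation",
        user={"user_id": "123"},
    )
    result = await service.retrieve(
        queries=[{"role": "user", "content": {"text": "What did we discuss?"}}],
        where={"user_id": "123"},
        method="rag",
    )
    print(result["items"])

asyncio.run(main())
```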
MemU supports custom LLM and embedding providers beyond OpenAI. Configure them via `llm_profiles`:
```python
from memu import MemUService

service = MemUService(
    llm_profiles={
        # Default profile for LLM operations
        "default": {
            "base_url": "https://dashscope.aliyuncs.com/compatible-mode/v1",
            "api_key": "your_api_key",
            "chat_model": "qwen3-max",
            "client_backend": "sdk"  # "sdk" or "http"
        },
        # Separate profile for embeddings
        "embedding": {
            "base_url": "https://api.voyageai.com/v1",
            "api_key": "your_voyage_api_key",
            "embed_model": "voyage-3.5-lite"
        }
    },
    # ... other configuration
)
```

MemU supports OpenRouter as a model provider, giving you access to multiple LLM providers through a single API.
```python
from memu import MemoryService

service = MemoryService(
    llm_profiles={
        "default": {
            "provider": "openrouter",
            "client_backend": "httpx",
            "base_url": "https://openrouter.ai",
            "api_key": "your_openrouter_api_key",
            "chat_model": "anthropic/claude-3.5-sonnet",     # Any OpenRouter model
            "embed_model": "openai/text-embedding-3-small",  # Embedding model
        },
    },
    database_config={
        "metadata_store": {"provider": "inmemory"},
    },
)
```

| Variable | Description |
|---|---|
| `OPENROUTER_API_KEY` | Your OpenRouter API key from openrouter.ai/keys |
| Feature | Status | Notes |
|---|---|---|
| Chat Completions | Supported | Works with any OpenRouter chat model |
| Embeddings | Supported | Use OpenAI embedding models via OpenRouter |
| Vision | Supported | Use vision-capable models (e.g., openai/gpt-4o) |
```bash
export OPENROUTER_API_KEY=your_api_key

# Full workflow test (memorize + retrieve)
python tests/test_openrouter.py

# Embedding-specific tests
python tests/test_openrouter_embedding.py

# Vision-specific tests
python tests/test_openrouter_vision.py
```

See examples/example_4_openrouter_memory.py for a complete working example.
### `memorize()` - Continuous Learning

Processes inputs in real time and immediately updates memory:

```python
result = await service.memorize(
    resource_url="path/to/file.json",  # File path or URL
    modality="conversation",           # conversation | document | image | video | audio
    user={"user_id": "123"}            # Optional: scope to a user
)
```

Returns:

```python
{
    "resource": {...},    # Stored resource metadata
    "items": [...],       # Extracted memory items (available instantly)
    "categories": [...]   # Auto-updated category structure
}
```
**Proactive Features:**
- Zero-delay processing: memories available immediately
- Automatic categorization without manual tagging
- Cross-reference with existing memories for pattern detection
### `retrieve()` - Dual-Mode Intelligence
MemU supports both **proactive context loading** and **reactive querying**:
<img width="100%" alt="retrieve" src="assets/retrieve.png" />
#### RAG-based Retrieval (`method="rag"`)

Fast **proactive context assembly** using embeddings:

- ✅ **Instant context**: Sub-second memory surfacing
- ✅ **Background monitoring**: Can run continuously without LLM costs
- ✅ **Similarity scoring**: Identifies most relevant memories automatically
#### LLM-based Retrieval (`method="llm"`)

Deep **anticipatory reasoning** for complex contexts:

- ✅ **Intent prediction**: LLM infers what the user needs before they ask
- ✅ **Query evolution**: Automatically refines search as context develops
- ✅ **Early termination**: Stops when sufficient context is gathered
#### Comparison
| Aspect | RAG (Fast Context) | LLM (Deep Reasoning) |
|--------|-------------------|---------------------|
| **Speed** | ⚡ Milliseconds | 🐢 Seconds |
| **Cost** | 💰 Embedding only | 💰💰 LLM inference |
| **Proactive use** | Continuous monitoring | Triggered context loading |
| **Best for** | Real-time suggestions | Complex anticipation |
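
One pattern this comparison suggests is cheap-first escalation: keep RAG retrieval running for continuous monitoring, and fall back to LLM reasoning only when fast recall comes back thin. A sketch (the item-count threshold is an arbitrary illustration, not a library feature):

```python
async def retrieve_with_escalation(service, queries, where=None, min_items=3):
    """Try fast embedding recall first; escalate to LLM reasoning if sparse."""
    result = await service.retrieve(queries=queries, where=where, method="rag")
    if len(result.get("items", [])) < min_items:
        # Too few fast-context hits: pay for deep anticipatory reasoning.
        result = await service.retrieve(queries=queries, where=where, method="llm")
    return result
```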
#### Usage
```python
# Proactive retrieval with context history
result = await service.retrieve(
queries=[
{"role": "user", "content": {"text": "What are their preferences?"}},
{"role": "user", "content": {"text": "Tell me about work habits"}}
],
where={"user_id": "123"}, # Optional: scope filter
method="rag" # or "llm" for deeper reasoning
)
# Returns context-aware results:
{
"categories": [...], # Relevant topic areas (auto-prioritized)
"items": [...], # Specific memory facts
"resources": [...], # Original sources for traceability
"next_step_query": "..." # Predicted follow-up context
}
```
**Proactive Filtering**: Use `where` to scope continuous monitoring:

- `where={"user_id": "123"}` - User-specific context
- `where={"agent_id__in": ["1", "2"]}` - Multi-agent coordination
- Omit `where` for global context awareness
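
For instance, a coordinator watching shared context across two agents might scope its continuous monitoring like this (a sketch reusing the filter syntax above):

```python
# Multi-agent coordination: surface memories visible to either agent.
result = await service.retrieve(
    queries=[{"role": "user", "content": {"text": "open action items"}}],
    where={"agent_id__in": ["1", "2"]},
    method="rag",  # cheap enough to run continuously in the background
)
```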
For complete API documentation, see SERVICE_API.md, which includes proactive workflow patterns, pipeline configuration, and real-time update handling.
Continuously learns from every interaction without explicit memory commands:
```bash
export OPENAI_API_KEY=your_api_key
python examples/example_1_conversation_memory.py
```

**Proactive Behavior:**
- Automatically extracts preferences from casual mentions
- Builds relationship models from interaction patterns
- Surfaces relevant context in future conversations
- Adapts communication style based on learned preferences
Best for: Personal AI assistants, customer support that remembers, social chatbots
Learns from execution logs and proactively suggests optimizations:
```bash
export OPENAI_API_KEY=your_api_key
python examples/example_2_skill_extraction.py
```

**Proactive Behavior:**
- Monitors agent actions and outcomes continuously
- Identifies patterns in successes and failures
- Auto-generates skill guides from experience
- Proactively suggests strategies for similar future tasks
Best for: DevOps automation, agent self-improvement, knowledge capture
Unifies memory across different input types for comprehensive context:
```bash
export OPENAI_API_KEY=your_api_key
python examples/example_3_multimodal_memory.py
```

**Proactive Behavior:**
- Cross-references text, images, and documents automatically
- Builds unified understanding across modalities
- Surfaces visual context when discussing related topics
- Anticipates information needs by combining multiple sources
Best for: Documentation systems, learning platforms, research assistants
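
Under the hood, this use case is `memorize()` called with different `modality` values against the same user scope, so memories from each source land in one shared structure. A sketch (file paths are placeholders):

```python
# Ingest a design document and a related screenshot for the same user.
await service.memorize(
    resource_url="docs/design_spec.md",        # placeholder path
    modality="document",
    user={"user_id": "123"},
)
await service.memorize(
    resource_url="images/dashboard_mock.png",  # placeholder path
    modality="image",
    user={"user_id": "123"},
)

# Later retrieval can surface the screenshot when the spec comes up.
result = await service.retrieve(
    queries=[{"role": "user", "content": {"text": "dashboard design decisions"}}],
    where={"user_id": "123"},
)
```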
MemU achieves 92.09% average accuracy on the LoCoMo benchmark across all reasoning tasks, demonstrating reliable proactive memory operations.
View detailed experimental data: memU-experiment
| Repository | Description | Proactive Features |
|---|---|---|
| memU | Core proactive memory engine | 24/7 learning pipeline, auto-categorization |
| memU-server | Backend with continuous sync | Real-time memory updates, webhook triggers |
| memU-ui | Visual memory dashboard | Live memory evolution monitoring |
Quick Links:
- Try MemU Cloud
- API Documentation
- Discord Community
We welcome contributions from the community! Whether you're fixing bugs, adding features, or improving documentation, your help is appreciated.
To start contributing to MemU, you'll need to set up your development environment:
- Python 3.13+
- uv (Python package manager)
- Git
```bash
# 1. Fork and clone the repository
git clone https://github.com/YOUR_USERNAME/memU.git
cd memU

# 2. Install development dependencies
make install
```

The `make install` command will:
- Create a virtual environment using `uv`
- Install all project dependencies
- Set up pre-commit hooks for code quality checks
Before submitting your contribution, ensure your code passes all quality checks:
```bash
make check
```

The `make check` command runs:

- **Lock file verification**: Ensures `pyproject.toml` consistency
- **Pre-commit hooks**: Lints code with Ruff, formats with Black
- **Type checking**: Runs `mypy` for static type analysis
- **Dependency analysis**: Uses `deptry` to find obsolete dependencies
For detailed contribution guidelines, code standards, and development practices, please see CONTRIBUTING.md.
Quick tips:
- Create a new branch for each feature or bug fix
- Write clear commit messages
- Add tests for new functionality
- Update documentation as needed
- Run `make check` before pushing
- GitHub Issues: Report bugs & request features
- Discord: Join the community
- X (Twitter): Follow @memU_ai
- Contact: info@nevamind.ai
⭐ Star us on GitHub to get notified about new releases!



