An MCP server that converts successful conversation threads into reusable prompts for future tasks.
It is built on the principle that the most important artifact of an LLM interaction is the process that produced the result, not the result itself (see The New Code), and on the observation that LLMs are likely already better prompt engineers than humans.
### save_prompt

Summarizes, categorizes, and converts conversation history into a markdown-formatted prompt template. Run it upon completion of a successful complex task to build your prompt library.

Parameters:

- `conversation_messages` (string): JSON string containing the conversation history
- `task_description` (optional string): Description of the task being performed
- `context_info` (optional string): Additional context about the conversation
### search_prompts

Retrieves prompts from the database using semantic search (if a Voyage API key is available), falling back to text search otherwise. Returns the most relevant prompts for user selection.

Parameters:

- `query` (string): Description of the problem or task you need help with
- `limit` (optional int): Maximum number of prompts to return (default: 3)
### improve_prompt_from_feedback

Summarizes feedback from the conversation and updates the prompt based on that feedback and the conversation context.

Parameters:

- `prompt_id` (string): ID of the prompt to improve
- `feedback` (string): User feedback about the prompt's effectiveness
- `conversation_context` (optional string): Context from the conversation where the prompt was used
### update_prompt

For manual updates to an existing prompt.

Parameters:

- `prompt_id` (string): ID of the prompt to update
- `change_description` (string): Description of what was changed and why
- `summary` (optional string): Updated summary
- `prompt_template` (optional string): Updated prompt template
- `history` (optional string): Updated history
- `use_case` (optional string): Updated use case
### get_prompt_details

Gets detailed information about a specific prompt, including its full template, history, and metadata. Use it to view the complete prompt before applying it.

Parameters:

- `prompt_id` (string): ID of the prompt to retrieve
### search_prompts_by_use_case

Searches for prompts by use-case category (e.g., `code-gen`, `text-gen`, `data-analysis`). This category-based search is efficient and works independently of embedding services.

Parameters:

- `use_case` (string): The use case category to search for
- `limit` (optional int): Maximum number of results to return (default: 5)
The generated prompt templates use the following prompt engineering techniques:
Templates are organized in this order:
- Identity: Defines the assistant's persona and goals
- Instructions: Provides clear rules and constraints
- Examples: Shows desired input/output patterns (few-shot learning)
- Context: Adds relevant data and documents
Formatting conventions:

- Markdown headers (`#`) and lists (`*`) for logical hierarchy
- XML tags (e.g., `<example>`) to separate content sections
- Message roles (developer/user/assistant) where appropriate
- Placeholders (`{variable}`) for customizable inputs
This ensures the saved prompts are well-structured, reusable, and effective for future tasks.
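To make the structure concrete, here is an illustrative sketch of a generated template following the ordering and formatting conventions above. The content is hypothetical, not actual output of this server:

```markdown
# Identity
You are a senior Python developer who writes robust, well-tested utilities.

# Instructions
* Follow the user's requirements exactly.
* Always include error handling and explain edge cases.

# Examples
<example>
Input: "Parse a CSV file with a header row"
Output: A function using the csv module with try/except around file I/O.
</example>

# Context
<context>
{relevant_documents}
</context>

{task_description}
```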
To maximize the value of this MCP server, add the following instructions to your LLM interface's system prompt:
- Always search for relevant prompts before starting any large or complex task.
- Upon successful completion of a task, always ask if I want to save the conversation as a prompt.
- Upon successful completion of a task that used a prompt, always ask if I want to update the prompt.
This helps ensure that the LLM runs the relevant tools without you explicitly asking.
💡 Tip: For enhanced MongoDB management, consider using the MongoDB MCP Server alongside this prompt saver. It provides direct MongoDB operations and can help you manage your prompt database more effectively.
1. Clone the repository:

   ```shell
   git clone <repository-url>
   cd prompt-saver-mcp
   ```

2. Install dependencies:

   ```shell
   uv sync
   # or with pip:
   pip install -e .
   ```
3. Set up MongoDB Atlas:

   - Create a MongoDB Atlas cluster
   - Create a database named `prompt_saver`
   - (Optional) Create a vector search index on the `embedding` field (2048 dimensions, dotProduct similarity) for semantic search
4. Configure environment variables:

   ```shell
   cp .env.example .env
   # Edit .env with your API keys and MongoDB Atlas URI
   ```
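For the optional vector search index mentioned in the MongoDB Atlas setup, an Atlas Vector Search index definition like the following can be pasted into the Atlas JSON editor (the index name is your choice; the dimensions and similarity match the values above):

```json
{
  "fields": [
    {
      "type": "vector",
      "path": "embedding",
      "numDimensions": 2048,
      "similarity": "dotProduct"
    }
  ]
}
```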
Required for all configurations:

- `MONGODB_URI`: MongoDB Atlas connection string

Optional:

- `VOYAGE_AI_API_KEY`: Voyage AI API key (enables semantic search)

Choose one LLM provider:

- Azure OpenAI: `AZURE_OPENAI_API_KEY`, `AZURE_OPENAI_ENDPOINT`
- OpenAI: `OPENAI_API_KEY`
- Anthropic: `ANTHROPIC_API_KEY`
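A minimal `.env` might look like the following (illustrative values only; this example assumes OpenAI as the provider):

```
MONGODB_URI=mongodb+srv://username:[email protected]/
MONGODB_DATABASE=prompt_saver
# Optional; enables semantic search
VOYAGE_AI_API_KEY=your_voyage_ai_api_key_here
LLM_PROVIDER=openai
OPENAI_API_KEY=your_openai_api_key_here
OPENAI_MODEL=gpt-4o
```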
| Variable | Description | Default |
|---|---|---|
| `MONGODB_URI` | MongoDB Atlas connection URI | Required |
| `MONGODB_DATABASE` | Database name | `prompt_saver` |
| `MONGODB_COLLECTION` | Collection name | `prompts` |
| `VOYAGE_AI_API_KEY` | Voyage AI API key (enables semantic search) | Optional |
| `VOYAGE_AI_EMBEDDING_MODEL` | Embedding model | `voyage-3-large` |
| Variable | Description | Default |
|---|---|---|
| `LLM_PROVIDER` | LLM provider: `azure_openai`, `openai`, or `anthropic` | `azure_openai` |
Provider Options:
- Azure OpenAI: Enterprise-grade with enhanced security and compliance
- OpenAI: Direct access to latest models with simple API key setup
- Anthropic: Claude models with strong reasoning capabilities
| Variable | Description | Default |
|---|---|---|
| `AZURE_OPENAI_API_KEY` | Azure OpenAI API key | Required |
| `AZURE_OPENAI_ENDPOINT` | Azure OpenAI endpoint | Required |
| `AZURE_OPENAI_MODEL` | Model deployment name | `gpt-4o` |
| Variable | Description | Default |
|---|---|---|
| `OPENAI_API_KEY` | OpenAI API key | Required |
| `OPENAI_MODEL` | Model name | `gpt-4o` |
| Variable | Description | Default |
|---|---|---|
| `ANTHROPIC_API_KEY` | Anthropic API key | Required |
| `ANTHROPIC_MODEL` | Model name | `claude-sonnet-4-20250514` |
Add to your Claude Desktop configuration file. Choose one of the following configurations based on your preferred LLM provider:
Azure OpenAI:

```json
{
  "mcpServers": {
    "prompt-saver": {
      "command": "uv",
      "args": ["run", "python", "-m", "prompt_saver_mcp.server", "stdio"],
      "cwd": "/path/to/your/prompt-saver-mcp",
      "env": {
        "MONGODB_URI": "mongodb+srv://username:[email protected]/",
        "MONGODB_DATABASE": "prompt_saver",
        "VOYAGE_AI_API_KEY": "your_voyage_ai_api_key_here",
        "LLM_PROVIDER": "azure_openai",
        "AZURE_OPENAI_API_KEY": "your_azure_openai_api_key_here",
        "AZURE_OPENAI_ENDPOINT": "https://your-resource.openai.azure.com/",
        "AZURE_OPENAI_MODEL": "gpt-4o"
      }
    }
  }
}
```
OpenAI:

```json
{
  "mcpServers": {
    "prompt-saver": {
      "command": "uv",
      "args": ["run", "python", "-m", "prompt_saver_mcp.server", "stdio"],
      "cwd": "/path/to/your/prompt-saver-mcp",
      "env": {
        "MONGODB_URI": "mongodb+srv://username:[email protected]/",
        "MONGODB_DATABASE": "prompt_saver",
        "VOYAGE_AI_API_KEY": "your_voyage_ai_api_key_here",
        "LLM_PROVIDER": "openai",
        "OPENAI_API_KEY": "your_openai_api_key_here",
        "OPENAI_MODEL": "gpt-4o"
      }
    }
  }
}
```
Anthropic:

```json
{
  "mcpServers": {
    "prompt-saver": {
      "command": "uv",
      "args": ["run", "python", "-m", "prompt_saver_mcp.server", "stdio"],
      "cwd": "/path/to/your/prompt-saver-mcp",
      "env": {
        "MONGODB_URI": "mongodb+srv://username:[email protected]/",
        "MONGODB_DATABASE": "prompt_saver",
        "VOYAGE_AI_API_KEY": "your_voyage_ai_api_key_here",
        "LLM_PROVIDER": "anthropic",
        "ANTHROPIC_API_KEY": "your_anthropic_api_key_here",
        "ANTHROPIC_MODEL": "claude-sonnet-4-20250514"
      }
    }
  }
}
```
Note: For Anthropic support, install the `anthropic` package:

```shell
pip install anthropic
```
After completing a complex task, save the conversation as a reusable prompt:
```python
import json

# Example conversation messages (JSON format)
conversation = [
    {"role": "user", "content": "Help me create a Python function to parse CSV files"},
    {"role": "assistant", "content": "I'll help you create a robust CSV parser..."},
    # ... more conversation
]

# Save the prompt
save_prompt(
    conversation_messages=json.dumps(conversation),
    task_description="Creating a CSV parser function",
    context_info="Successfully created a parser with error handling"
)
```
Search for relevant prompts when starting a new task:
```python
# Search for prompts
result = search_prompts("I need help with data processing in Python")

# The tool will return the most relevant prompts and ask you to select one
```
After using a prompt, update it based on your experience:
```python
update_prompt(
    prompt_id="prompt_id_here",
    change_description="Added error handling examples",
    prompt_template="Updated template with better error handling..."
)
```
Retrieve the full details of a specific prompt:
```python
# Get complete prompt information
result = get_prompt_details("prompt_id_here")
print(result["prompt"]["prompt_template"])  # View the full template
```
Use AI to automatically improve a prompt based on feedback:
```python
improve_prompt_from_feedback(
    prompt_id="prompt_id_here",
    feedback="The prompt worked well but could use more specific examples for edge cases",
    conversation_context="Used for debugging a complex API integration issue"
)
```
Find prompts for specific types of tasks:
```python
# Find all code generation prompts
result = search_prompts_by_use_case("code-gen", limit=5)

# Find data analysis prompts
result = search_prompts_by_use_case("data-analysis", limit=3)
```
Each prompt is stored with the following structure:
```python
{
    "_id": ObjectId,
    "use_case": str,            # "code-gen", "text-gen", "data-analysis", "creative", "general"
    "summary": str,             # Summary of the prompt and its use case
    "prompt_template": str,     # Universal problem-solving prompt template
    "history": str,             # Summary of steps taken and end result
    "embedding": List[float],   # Vector embedding of the summary
    "last_updated": datetime,
    "num_updates": int,
    "changelog": List[str]      # List of changes made to this prompt
}
```
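If you work with these documents in your own scripts, the schema can be mirrored with a `TypedDict`. The type name and helper function below are illustrative (they are not part of this server's code); Mongo's `_id` is omitted because the driver assigns it on insert:

```python
from datetime import datetime, timezone
from typing import List, TypedDict


class PromptDoc(TypedDict):
    """Mirrors the stored prompt document (excluding Mongo's auto-assigned _id)."""
    use_case: str
    summary: str
    prompt_template: str
    history: str
    embedding: List[float]
    last_updated: datetime
    num_updates: int
    changelog: List[str]


def new_prompt_doc(use_case: str, summary: str, template: str) -> PromptDoc:
    # A freshly saved prompt starts with no updates and an empty changelog;
    # the embedding is filled in later if a Voyage API key is configured.
    return PromptDoc(
        use_case=use_case,
        summary=summary,
        prompt_template=template,
        history="",
        embedding=[],
        last_updated=datetime.now(timezone.utc),
        num_updates=0,
        changelog=[],
    )


doc = new_prompt_doc("code-gen", "CSV parsing helper", "You are a Python developer...")
```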
MIT License - see LICENSE file for details.