AI Prompt Optimization Platform


Professional AI Prompt Optimization, Debugging, and Sharing Platform

🚀 Quick Start • 📖 Features • 🛠️ Tech Stack • 📦 Deployment Guide • 🤝 Contribution Guide


📋 Project Overview

The AI Prompt Optimization Platform is a professional tool designed to help users optimize prompts for AI models, enhancing AI conversation effectiveness and response accuracy. The platform integrates intelligent optimization algorithms, deep inference analysis, visualization debugging tools, and community sharing features, providing comprehensive prompt optimization solutions for AI application developers and content creators.

🎯 Core Values

  • Intelligent Optimization: Automatically analyzes and optimizes prompt structure using advanced AI algorithms.
  • Deep Inference: Provides multidimensional reasoning analysis to deeply understand user needs.
  • Community Sharing: Discover and share high-quality prompt templates and exchange experience with other community users.
  • Visualization Debugging: A powerful debugging environment with real-time preview of prompt effects.

✨ Features

🧠 Intelligent Prompt Optimization

  • Automatic Structure Analysis: In-depth analysis of the semantic structure and logical relationships of prompts.
  • Multidimensional Optimization: Optimizes from multiple dimensions such as clarity, accuracy, and completeness.
  • Deep Inference Mode: Enables AI deep thinking to provide detailed analysis processes.
  • Real-time Generation: Streams optimization results as they are produced, letting you view the generation process in real time.

📚 Prompt Template Management

  • Template Creation: Save optimized prompts as reusable templates.
  • Tag Classification: Supports multi-tag classification management for easy searching and organization.
  • Favorite Function: Bookmark favorite templates for quick access to commonly used prompts.
  • Usage Statistics: Track template usage and feedback on effectiveness.

🌍 Community Sharing Platform

  • Public Sharing: Share high-quality templates with community users.
  • Popularity Rankings: Display popular templates based on views, likes, etc.
  • Search Discovery: Powerful search function to quickly find needed templates.
  • Interactive Communication: Social features like likes, comments, and bookmarks.

🔧 Debugging and Testing Tools

  • Visual Interface: An intuitive user interface that simplifies the workflow.
  • Real-time Preview: Instantly view the effect of prompt optimizations.
  • History Records: Saves optimization history and supports version comparison.
  • Export Functionality: Supports exporting optimization results in various formats.

🌐 Multi-language Support

  • Language Switching: Supports switching between Chinese and English interfaces.
  • Real-time Translation: Switch languages without refreshing the page.
  • Localized Content: All interface elements are fully localized.
  • Browser Detection: Automatically detects language based on browser settings.

🛠️ Tech Stack

Backend Technologies

  • Framework: .NET 9.0 + ASP.NET Core
  • AI Engine: Microsoft Semantic Kernel 1.54.0
  • Database: PostgreSQL + Entity Framework Core
  • Authentication: JWT Token Authentication
  • Logging: Serilog Structured Logging
  • API Documentation: Scalar OpenAPI

Frontend Technologies

  • Framework: React 19.1.0 + TypeScript
  • UI Components: Ant Design 5.25.3
  • Routing: React Router DOM 7.6.1
  • State Management: Zustand 5.0.5
  • Styling: Styled Components 6.1.18
  • Build Tool: Vite 6.3.5

Core Dependencies

  • AI Model Integration: OpenAI API Compatible Interface
  • Real-time Communication: Server-Sent Events (SSE)
  • Data Storage: IndexedDB (client-side cache)
  • Rich Text Editing: TipTap Editor
  • Code Highlighting: Prism.js + React Syntax Highlighter
  • Internationalization: React i18next Multi-language Support

📦 Deployment Guide

Environment Requirements

  • Docker & Docker Compose
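
You can quickly confirm that both prerequisites are available before deploying (a simple check, assuming a standard Docker installation; either the Compose plugin or the standalone docker-compose binary works):

# Verify Docker is installed and the daemon is running
docker --version
docker info

# Verify Docker Compose is available (plugin or standalone form)
docker compose version || docker-compose --version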

🚀 Quick Start

1. Standard Deployment (Recommended)

# Clone the project
git clone https://github.com/AIDotNet/auto-prompt.git
cd auto-prompt

# Start service
docker-compose up -d

# Check service status
docker-compose ps

Access URL: http://localhost:10426
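
Once the containers are up, a quick check against the URL above confirms the console is serving (this only verifies that the HTTP port responds, not that an AI endpoint is configured):

# Expect an HTTP response (200 or a redirect) from the web console
curl -I http://localhost:10426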

2. Custom API Endpoint Deployment

Create a docker-compose.override.yaml file:

version: '3.8'

services:
  console-service:
    environment:
      # Custom AI API endpoint
      - OpenAIEndpoint=https://your-api-endpoint.com/v1
      # Available model configuration
      - CHAT_MODEL=gpt-4,gpt-3.5-turbo,claude-3-sonnet
      - DEFAULT_CHAT_MODEL=gpt-4
      - GenerationChatModel=gpt-4

# Start with custom configuration
docker-compose -f docker-compose.yaml -f docker-compose.override.yaml up -d
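
To confirm the override took effect, you can preview the merged configuration and inspect the environment of the running container (console-service is the service name used in the compose files above):

# Show the merged compose configuration
docker-compose -f docker-compose.yaml -f docker-compose.override.yaml config

# Verify the custom endpoint is visible inside the container
docker-compose exec console-service printenv OpenAIEndpoint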

3. Local AI Service Deployment (Ollama)

Create a docker-compose.ollama.yaml file:

version: '3.8'

services:
  console-service:
    image: registry.cn-shenzhen.aliyuncs.com/tokengo/console
    ports:
      - "10426:8080"
    environment:
      - TZ=Asia/Shanghai
      - OpenAIEndpoint=http://ollama:11434/v1
      - CHAT_MODEL=qwen2.5-coder:32b,llama3.2:3b,gemma2:9b
      - DEFAULT_CHAT_MODEL=qwen2.5-coder:32b
      - GenerationChatModel=qwen2.5-coder:32b
      - ConnectionStrings:Type=sqlite
      - ConnectionStrings:Default=Data Source=/app/data/ConsoleService.db
    volumes:
      - ./data:/app/data
    depends_on:
      - ollama
    restart: unless-stopped

  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama
    environment:
      - OLLAMA_HOST=0.0.0.0
    restart: unless-stopped
    # GPU support (if NVIDIA GPU available)
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: 1
    #           capabilities: [gpu]

volumes:
  ollama_data:

Start the Ollama version:

# Start service
docker-compose -f docker-compose.ollama.yaml up -d

# Pull recommended models
docker exec ollama ollama pull qwen3
docker exec ollama ollama pull qwen2.5:3b
docker exec ollama ollama pull llama3.2:3b

# Verify models
docker exec ollama ollama list
docker-compose -f docker-compose.ollama.yaml restart console-service
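
To double-check that Ollama is reachable and the models were pulled, you can also query its HTTP API directly (Ollama's /api/tags endpoint lists locally available models):

# List the models Ollama has downloaded
curl http://localhost:11434/api/tags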

🚀 One-click Start Script:

To simplify the deployment process, we provide a one-click start script:

Linux/macOS Users:

# Add execution permission to script
chmod +x start-ollama.sh

# Run one-click start script
./start-ollama.sh

Windows Users:

# Directly run batch script
start-ollama.bat

Script Features:

  • 🚀 Automatically start the ollama service and console service
  • ⏳ Wait for services to fully start
  • 📦 Automatically pull the qwen3 model
  • ✅ Verify model installation status
  • 🎉 Display access address upon completion
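
For reference, such a start script boils down to a handful of commands. The sketch below is only an illustration of the steps listed above (it is not the repository's start-ollama.sh) and assumes the docker-compose.ollama.yaml file from section 3:

#!/bin/bash
# Illustrative sketch only - see start-ollama.sh in the repository for the actual script
set -e

# 1. Start the Ollama service and the console service
docker-compose -f docker-compose.ollama.yaml up -d

# 2. Wait until Ollama accepts requests
until curl -s http://localhost:11434/api/tags > /dev/null; do
  echo "Waiting for Ollama to start..."
  sleep 2
done

# 3. Pull the qwen3 model
docker exec ollama ollama pull qwen3

# 4. Verify the model installation
docker exec ollama ollama list

# 5. Display the access address
echo "Console available at http://localhost:10426"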

Recommended Models:

  • qwen3 - Excellent Chinese conversation quality (about 5GB)
  • qwen2.5:3b - Lightweight version (about 2GB)
  • llama3.2:3b - Good English conversation quality (about 2GB)
  • gemma2:9b - Google open-source model (about 5GB)

4. PostgreSQL Database Deployment

Create a docker-compose.postgres.yaml file:

version: '3.8'

services:
  console-service:
    image: registry.cn-shenzhen.aliyuncs.com/tokengo/console
    ports:
      - "10426:8080"
    environment:
      - TZ=Asia/Shanghai
      - OpenAIEndpoint=https://api.openai.com/v1
      - ConnectionStrings:Type=postgresql
      - ConnectionStrings:Default=Host=postgres;Database=auto_prompt;Username=postgres;Password=your_password
    depends_on:
      - postgres
    restart: unless-stopped

  postgres:
    image: postgres:16-alpine
    environment:
      - POSTGRES_DB=auto_prompt
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=your_password
      - TZ=Asia/Shanghai
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    restart: unless-stopped

volumes:
  postgres_data:
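
After this stack is up, two quick sanity checks are useful (the service name, database, and user below are the ones defined in the compose file above):

# Confirm PostgreSQL is accepting connections
docker-compose -f docker-compose.postgres.yaml exec postgres pg_isready -U postgres

# Take an ad-hoc backup of the auto_prompt database
docker-compose -f docker-compose.postgres.yaml exec -T postgres pg_dump -U postgres auto_prompt > auto_prompt_backup.sql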

🔐 Default Account Information

  • Username: admin
  • Password: admin123

🔧 Environment Variable Configuration

Variable Name            Description                  Default Value
OpenAIEndpoint           AI API endpoint address      https://api.token-ai.cn/v1
CHAT_MODEL               Available chat model list    gpt-4.1,o4-mini,claude-sonnet-4-20250514
DEFAULT_CHAT_MODEL       Default chat model           gpt-4.1-mini
DEFAULT_USERNAME         Default admin username       admin
DEFAULT_PASSWORD         Default admin password       admin123
ConnectionStrings:Type   Database type                sqlite
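
Any of these variables can be changed without editing the main compose file. A minimal sketch (the variable names come from the table above; docker-compose automatically merges a docker-compose.override.yaml placed next to docker-compose.yaml):

# Create an override file that changes the default admin credentials
cat > docker-compose.override.yaml <<'EOF'
services:
  console-service:
    environment:
      - DEFAULT_USERNAME=admin
      - DEFAULT_PASSWORD=please-change-me
EOF

docker-compose up -d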

⚡ Common Commands

# View logs
docker-compose logs -f console-service

# Restart service
docker-compose restart console-service

# Stop service
docker-compose down

# Update image
docker-compose pull && docker-compose up -d
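
If you use the SQLite setup with the ./data bind mount shown in the Ollama example, a backup is just an archive of that directory (a sketch; adjust the path if your data is mounted elsewhere):

# Back up the SQLite data directory
tar czf auto-prompt-data-$(date +%F).tar.gz ./data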

๐Ÿ—๏ธ Project Structure

auto-prompt/
├── src/
│   └── Console.Service/         # Backend service
│       ├── Controllers/         # API controllers
│       ├── Services/            # Business services
│       ├── Entities/            # Data entities
│       ├── Dto/                 # Data transfer objects
│       ├── plugins/             # AI plugin configurations
│       └── Migrations/          # Database migrations
├── web/                         # Frontend application
│   ├── src/
│   └── public/                  # Static resources
├── docker-compose.yaml          # Docker orchestration configuration
└── README.md                    # Project documentation

🎮 Usage Guide

1. Prompt Optimization

  1. Enter the prompt you want to optimize in the workspace.
  2. Describe specific needs and expected effects.
  3. Choose whether to enable deep inference mode.
  4. Click "Generate" to start the optimization process.
  5. View optimization results and inference process.

2. Template Management

  1. Save optimized prompts as templates.
  2. Add titles, descriptions, and tags.
  3. Manage personal templates in "My Prompts."
  4. Supports editing, deleting, bookmarking, etc.

3. Community Sharing

  1. Browse popular templates in the Prompt Square.
  2. Use the search function to find specific types of templates.
  3. Like and bookmark templates of interest.
  4. Share your high-quality templates with the community.

4. Language Switching

  1. Click the language switch button (🌐) in the top right corner or sidebar.
  2. Choose your preferred language (Chinese/English).
  3. The interface will switch languages immediately without refreshing the page.
  4. Your language preference will be saved and automatically applied next time you visit.

📄 Open Source License

This project is licensed under the LGPL (Lesser General Public License).

License Terms

  • ✅ Commercial Use: Allowed to deploy and use in commercial environments.
  • ✅ Distribution: Allowed to distribute original code and binaries.
  • ✅ Modification: Allowed to modify source code for personal or internal use.
  • ❌ Commercial Distribution of Modified Code: Prohibited from distributing modified source code commercially.
  • ⚠️ Liability: Users assume the risk of using this software.

Important Notes

  • You may deploy this project directly for commercial use.
  • You may develop internal tools based on this project.
  • You may not repackage and distribute modified source code.
  • You must retain the original copyright notice.

For detailed license terms, please refer to the LICENSE file.

🙏 Acknowledgments

Thanks to the open-source projects and technologies this platform builds on, including .NET, Microsoft Semantic Kernel, React, Ant Design, and the other libraries listed in the Tech Stack section.

📞 Contact Us


Star History

Star History Chart

👥 Contributors

Thanks to all the developers who contributed to this project!

💌 WeChat

(WeChat QR code)

If this project helps you, please give us a ⭐ Star!

Made with ❤️ by TokenAI
