
LLM Email Distribution System

A complete Docker-based system for generating and distributing software applications via email using Large Language Models (LLMs).

🌟 Features

  • AI-Powered Code Generation: Uses OpenAI, Anthropic, or local Ollama models
  • Email Distribution: Automatically packages and sends generated applications via SMTP
  • Multi-LLM Support: Switch between cloud and local LLM providers
  • Webhook Integration: Trigger generation via HTTP webhooks
  • Docker Containerization: Complete Docker-based deployment
  • Monitoring: Built-in Prometheus metrics and Grafana dashboards
  • Email Testing: Local MailHog server for development

🏗️ Architecture

┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│  Webhook API    │───▶│  LLM Generator  │───▶│  SMTP Service   │
│  (Port 9000)    │    │  (Port 8000)    │    │  (Port 5000)    │
└─────────────────┘    └─────────────────┘    └─────────────────┘
                                │                       │
                                ▼                       ▼
                       ┌─────────────────┐    ┌─────────────────┐
                       │   Ollama LLM    │    │    MailHog      │
                       │  (Port 11434)   │    │  (Port 8025)    │
                       └─────────────────┘    └─────────────────┘
                                │                       │
                                ▼                       ▼
                       ┌─────────────────┐    ┌─────────────────┐
                       │     Redis       │    │   Prometheus    │
                       │  (Port 6379)    │    │  (Port 9090)    │
                       └─────────────────┘    └─────────────────┘

🚀 Quick Start

1. Prerequisites

  • Docker and Docker Compose
  • At least 8GB RAM (for local LLM models)
  • OpenAI or Anthropic API key (optional, for cloud LLMs)

2. Setup

# Clone repository
git clone <repository-url>
cd llm-email-distribution

# Setup environment
cp .env.example .env
# Edit .env with your configuration

# Run setup script
chmod +x scripts/setup.sh
./scripts/setup.sh

3. Configuration

Edit the .env file with your settings:

# LLM Provider (openai, anthropic, or ollama)
LLM_PROVIDER=ollama
OPENAI_API_KEY=your_key_here
ANTHROPIC_API_KEY=your_key_here

# SMTP Settings
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_USER=[email protected]
SMTP_PASSWORD=your_app_password
SMTP_USE_TLS=true

# Security
API_TOKEN=your_secure_token_here

4. Start Services

docker-compose up -d

📖 Usage

Generate Application via API

curl -X POST http://localhost:8000/generate \
  -H "Authorization: Bearer your_token" \
  -H "Content-Type: application/json" \
  -d '{
    "app_type": "dashboard",
    "description": "Analytics dashboard with charts",
    "recipient_email": "[email protected]",
    "tech_stack": ["python", "fastapi", "html"],
    "features": ["responsive_design", "charts", "real_time_updates"]
  }'

Generate via Webhook

curl -X POST http://localhost:9000/webhook/generate \
  -H "Content-Type: application/json" \
  -d '{
    "app_type": "api",
    "description": "REST API with authentication",
    "recipient_email": "[email protected]",
    "tech_stack": ["python", "fastapi"],
    "features": ["jwt_auth", "database", "swagger_docs"]
  }'

Check Generation Status

curl -H "Authorization: Bearer your_token" \
  http://localhost:8000/status/{request_id}

🔧 Services

Service            Port    Description
LLM Generator      8000    Main API for code generation
SMTP Service       5000    Email sending service
Webhook Receiver   9000    Webhook handling
MailHog UI         8025    Email testing interface
Ollama             11434   Local LLM service
Prometheus         9090    Metrics collection
Grafana            3000    Monitoring dashboard
Redis              6379    Caching and job queue

📧 Email Format

Generated applications are sent as ZIP attachments containing the files below (a packaging sketch follows the list):

  • Source code files (main.py, templates, static files)
  • Dockerfile for containerization
  • requirements.txt or package.json
  • README.md with setup instructions
  • metadata.json with generation details
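
For illustration, a minimal sketch of how such a package could be assembled; the helper and file contents are hypothetical, not the repo's actual packager:

import io
import json
import zipfile

def build_package(files: dict[str, str], metadata: dict) -> bytes:
    """Bundle generated files plus metadata.json into an in-memory ZIP."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for path, content in files.items():
            zf.writestr(path, content)
        zf.writestr("metadata.json", json.dumps(metadata, indent=2))
    return buf.getvalue()

# Illustrative usage with placeholder content
package = build_package(
    {"main.py": "print('hello')", "README.md": "# Generated app"},
    {"llm_model": "codellama:7b-instruct", "request_id": "req_12345"},
)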

🧪 Testing

# Run system tests
chmod +x scripts/test-system.sh
./scripts/test-system.sh

# Send test email
curl -X POST http://localhost:5000/send-test \
  -H "Content-Type: application/json" \
  -d '{"recipient": "[email protected]"}'

# View sent emails
open http://localhost:8025

📊 Monitoring

Prometheus collects service metrics at http://localhost:9090, and Grafana dashboards for those metrics are served at http://localhost:3000 (see the service table above).

🛠️ Development

Adding New LLM Providers

  1. Extend the LLMProvider class in llm-generator/llm_providers.py (see the sketch below)
  2. Add provider configuration in main.py
  3. Update environment variables
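
A hedged sketch of what a new provider might look like; the base-class location comes from step 1, but the method name and constructor are assumptions about its API:

# Extends the LLMProvider class in llm-generator/llm_providers.py (step 1);
# adapt the method names to the real base-class API.
class MyProvider(LLMProvider):
    """Illustrative provider subclass."""

    def __init__(self, api_key: str):
        self.api_key = api_key

    def generate(self, prompt: str) -> str:
        # Call the provider's completion endpoint and return the generated code.
        raise NotImplementedError("wire up the provider API here")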

Custom Templates

  1. Add Jinja2 templates in the templates/ directory (example below)
  2. Reference in code_generator.py
  3. Customize prompts for specific application types
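
For example, code_generator.py could render such a template like this (the template file name is hypothetical):

from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader("templates"))
template = env.get_template("fastapi_app.py.j2")  # hypothetical template file
source = template.render(app_name="analytics_dashboard", features=["charts"])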

Email Templates

  1. Modify email_packager.py for custom email formats
  2. Add HTML templates in email-templates/
  3. Customize attachment handling

🔒 Security Considerations

  • API Authentication: Use strong API tokens
  • SMTP Security: Use app passwords, not account passwords
  • Email Validation: Recipients are validated before sending
  • Code Review: Generated code should be reviewed before production use
  • Rate Limiting: Implement rate limiting for production deployments

🐛 Troubleshooting

Common Issues

  1. Ollama model download fails

    docker-compose exec ollama ollama pull codellama:7b-instruct
  2. SMTP authentication errors

    • Check app password configuration
    • Verify SMTP settings in .env
  3. Memory issues with local LLMs

    • Reduce model size or increase Docker memory limits
    • Switch to cloud LLM providers
  4. Email delivery issues

    • Inspect sent mail in the MailHog UI (http://localhost:8025) during development
    • Check smtp-service logs for SMTP errors

Logs

# View all logs
docker-compose logs

# Specific service logs
docker-compose logs -f llm-generator
docker-compose logs -f smtp-service

📝 License

Apache License - see LICENSE file for details.

🤝 Contributing

  1. Fork the repository
  2. Create feature branch
  3. Add tests for new features
  4. Submit pull request

📞 Support

  • Create GitHub issues for bugs and feature requests
  • Check logs for troubleshooting
  • Review configuration in .env file

AI LLM Email Distribution: Concept and Implementation Analysis

Introduction to the Concept

Using email as a distribution protocol for AI-generated software is a concept that combines the capabilities of Large Language Models (LLMs) with traditional email infrastructure. The idea is to distribute dynamically generated code and applications automatically over SMTP, using email as the medium for both transport and metadata.

Key system components:

  • LLM Generator: an AI model that generates code on demand
  • SMTP Server: an email server acting as the distribution channel
  • Webhook Interface: an API for triggering generation and delivery
  • Metadata Packaging: automatic creation of self-extracting packages
  • Email Parsing: automatic extraction and execution of attachments

Pros and Cons of the Model

Advantages

Email infrastructure is universal:

  • Every organization already runs a working email system
  • No additional deployment tooling is needed
  • Naturally compatible with existing workflows

AI-driven personalization:

  • Code is generated on demand from a specification
  • Output adapts dynamically to the user's environment
  • Dependencies and configuration are included automatically

Asynchronous distribution:

  • No blocking operations during generation
  • Requests are queued in the SMTP queue
  • Scales through distributed email servers

Audit trail and versioning:

  • Email history provides a natural log
  • Rollback is possible by resending older versions
  • Compliant with corporate email policies

Zero-dependency deployment:

  • No CI/CD pipelines required
  • No VPN or internal network access needed
  • Works through firewall restrictions

Disadvantages

Security limitations:

  • Email was not designed as a medium for executables
  • Code signing and verification are difficult
  • Vulnerable to email interception

Scalability problems:

  • Email attachment size limits (typically 25-50 MB)
  • SMTP delivery delays and retry mechanisms
  • No real-time feedback on deployment status

Debugging complexity:

  • Deployment errors are hard to trace
  • Limited logging capabilities
  • Dependency resolution problems

Compliance and audit issues:

  • Potential conflicts with corporate IT policies
  • Change management tracking is difficult
  • Legal questions around automated code distribution

Distribution in Practice

System Architecture

┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   User Request  │───▶│   LLM Generator │───▶│  SMTP Gateway   │
│  (Webhook/API)  │    │   (Code Gen)    │    │   (Email Send)  │
└─────────────────┘    └─────────────────┘    └─────────────────┘
                                │                       │
                                ▼                       ▼
                       ┌─────────────────┐    ┌─────────────────┐
                       │   Metadata      │    │   User Inbox    │
                       │   Packaging     │    │   (Receive)     │
                       └─────────────────┘    └─────────────────┘
                                │                       │
                                ▼                       ▼
                       ┌─────────────────┐    ┌─────────────────┐
                       │  EML Creation   │    │  Auto Extract   │
                       │  (Self-Extract) │    │   (Execute)     │
                       └─────────────────┘    └─────────────────┘

Process flow:

  1. Request initiation: a webhook or API call with application parameters
  2. LLM processing: the AI generates code from the input parameters
  3. Metadata enrichment: dependencies and configuration are added automatically
  4. EML packaging: a self-extracting email archive is created
  5. SMTP delivery: the message is sent through the configured SMTP server
  6. Client reception: the email is received and processed automatically
  7. Execution: the application is run in the target environment

Using webhooks:

Inbound webhooks (triggering generation):

{
  "app_type": "dashboard",
  "requirements": ["Python", "FastAPI", "Docker"],
  "recipient": "[email protected]",
  "parameters": {
    "database": "PostgreSQL",
    "auth": "OAuth2",
    "deployment": "containerized"
  }
}

Outbound webhooks (status notifications):

{
  "status": "email_sent",
  "request_id": "req_12345",
  "recipient": "[email protected]",
  "timestamp": "2025-06-19T10:30:00Z",
  "tracking_id": "email_67890"
}

Example Use Cases

1. Enterprise Internal Tools

  • Automatic generation of admin dashboards
  • Custom reporting applications
  • One-off automation scripts for specific tasks

2. Client Deliverables

  • Personalized demos for sales presentations
  • Custom integrations for client environments
  • Proof-of-concept applications

3. Emergency Deployments

  • Hotfix distribution when CI/CD is down
  • Disaster recovery tools
  • Quick patches for critical systems

4. Training and Development

  • Personalized learning environments
  • Custom exercise generators
  • Development environment setup

Technical Implementation Aspects

LLM Integration considerations:

Model selection criteria:

  • Code generation capabilities (Python, JavaScript, Docker)
  • Support for structured output (JSON metadata)
  • Rate limiting and cost considerations
  • Local vs. cloud deployment options

Prompt engineering patterns:

GENERATION_PROMPT = """
Generate a complete {app_type} application with the following requirements:
- Technology stack: {tech_stack}
- Deployment target: {deployment_target}
- Features: {features}

Include:
1. Complete source code
2. Dockerfile for containerization
3. Deployment instructions
4. Configuration files
5. Basic tests

Output as JSON with file paths and contents.
"""

SMTP Server considerations:

Authentication and security:

  • OAuth2 for Gmail/Office365 integration
  • SMTP-AUTH for dedicated servers
  • TLS encryption for all communications
  • Rate limiting for abuse prevention

Delivery optimization (a combined send-and-retry sketch follows this list):

  • Queue management for bulk operations
  • Retry logic for failed deliveries
  • Monitoring and alerting for SMTP health
  • Load balancing across multiple SMTP servers
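
A minimal sketch of a TLS-authenticated send with retry, assuming the SMTP settings from the .env example earlier; the backoff policy is illustrative:

import smtplib
import time
from email.message import EmailMessage

def send_with_retry(msg: EmailMessage, host: str, port: int,
                    user: str, password: str, retries: int = 3) -> None:
    for attempt in range(1, retries + 1):
        try:
            with smtplib.SMTP(host, port) as smtp:
                smtp.starttls()           # TLS for all communications
                smtp.login(user, password)
                smtp.send_message(msg)
            return
        except smtplib.SMTPException:
            if attempt == retries:
                raise
            time.sleep(2 ** attempt)      # simple exponential backoff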

Email formatting strategies:

MIME structure optimization:

multipart/mixed
├── text/plain (human readable summary)
├── text/html (rich formatted instructions)
├── application/octet-stream (source_code.zip)
├── application/json (metadata.json)
└── text/x-dockerfile (Dockerfile)
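
A sketch of assembling this structure with Python's standard email library. Note that EmailMessage nests the plain and HTML parts in a multipart/alternative inside the multipart/mixed container, which mail clients treat equivalently; the payloads below are placeholders:

from email.message import EmailMessage

zip_bytes = b"..."               # packaged application (source_code.zip)
metadata = b'{"version": "1.0"}'

msg = EmailMessage()
msg["Subject"] = "Your generated application"
msg["From"] = "[email protected]"
msg["To"] = "[email protected]"
msg.set_content("Human-readable summary of the generated application.")
msg.add_alternative("<p>Rich formatted instructions.</p>", subtype="html")
msg.add_attachment(zip_bytes, maintype="application",
                   subtype="octet-stream", filename="source_code.zip")
msg.add_attachment(metadata, maintype="application",
                   subtype="json", filename="metadata.json")
msg.add_attachment("FROM python:3.11-slim\n", subtype="x-dockerfile",
                   filename="Dockerfile")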

Metadata standardization:

{
  "version": "1.0",
  "generated_at": "2025-06-19T10:30:00Z",
  "llm_model": "gpt-4",
  "request_id": "req_12345",
  "app_metadata": {
    "name": "Custom Dashboard",
    "type": "web_application",
    "runtime": "python:3.11",
    "dependencies": ["fastapi", "uvicorn", "pydantic"]
  },
  "deployment": {
    "method": "docker",
    "port": 8080,
    "environment_vars": ["DATABASE_URL", "SECRET_KEY"]
  },
  "execution_instructions": [
    "docker build -t custom-dashboard .",
    "docker run -p 8080:8080 custom-dashboard"
  ]
}

Comparison with Alternative Approaches

Aspect                  Email Distribution   GitHub Actions   Docker Registry   Package Managers
Setup Complexity        Low                  Medium           Medium            High
Infrastructure Deps     Email only           Git + CI/CD      Registry server   Package repos
Real-time Feedback      Limited              Excellent        Good              Good
Security                Basic                Strong           Strong            Excellent
Versioning              Email history        Git-based        Tag-based         Semantic versioning
Rollback                Manual resend        Automated        Tag switching     Version downgrade
Enterprise Integration  Native               Good             Good              Excellent
Debugging               Limited              Excellent        Good              Good

Reference Implementation

The system consists of three main components:

1. AI Code Generator Service

  • REST API for request handling
  • LLM integration (OpenAI/Anthropic/Local)
  • Template management system
  • Code validation and testing

2. Email Distribution Service

  • SMTP server integration
  • Email template generation
  • Attachment handling
  • Delivery tracking

3. Client Integration Tools

  • Email parsing utilities
  • Automatic extraction scripts (sketched below)
  • Execution wrappers
  • Status reporting hooks
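
A hedged sketch of the extraction utility referenced above, using only the standard library; the .eml file name is illustrative:

import email
from email import policy
from pathlib import Path

def extract_attachments(eml_path: str, out_dir: str = ".") -> None:
    """Save every attachment from a received .eml file to out_dir."""
    msg = email.message_from_bytes(Path(eml_path).read_bytes(),
                                   policy=policy.default)
    for part in msg.iter_attachments():
        name = part.get_filename() or "attachment.bin"
        Path(out_dir, name).write_bytes(part.get_payload(decode=True))

extract_attachments("generated_app.eml")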

Conclusions and Recommendations

Email-based AI software distribution is an interesting concept for specific use cases, but it will not replace traditional methods for production systems.

Recommended uses:

  • Prototyping and rapid development
  • Internal tool distribution in small teams
  • Emergency deployment scenarios
  • Educational and training environments

Not recommended for:

  • Production deployment systems
  • Security-critical applications
  • High-frequency update cycles
  • Applications requiring complex dependency management

Key success factors:

  • Strong email infrastructure
  • Proper security protocols
  • Clear governance policies
  • Comprehensive monitoring
  • User education and training

The system can be a valuable addition to a developer's toolkit, but it should complement, not replace, established deployment methodologies.
