emllm/distribution

LLM Email Distribution System

A complete Docker-based system for generating and distributing software applications via email using Large Language Models (LLMs).

🌟 Features

  • AI-Powered Code Generation: Uses OpenAI, Anthropic, or local Ollama models
  • Email Distribution: Automatically packages and sends generated applications via SMTP
  • Multi-LLM Support: Switch between cloud and local LLM providers
  • Webhook Integration: Trigger generation via HTTP webhooks
  • Docker Containerization: Complete Docker-based deployment
  • Monitoring: Built-in Prometheus metrics and Grafana dashboards
  • Email Testing: Local MailHog server for development

πŸ—οΈ Architecture

┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│  Webhook API    │───▶│  LLM Generator  │───▶│  SMTP Service   │
│  (Port 9000)    │    │  (Port 8000)    │    │  (Port 5000)    │
└─────────────────┘    └─────────────────┘    └─────────────────┘
                                │                       │
                                ▼                       ▼
                       ┌─────────────────┐    ┌─────────────────┐
                       │   Ollama LLM    │    │    MailHog      │
                       │  (Port 11434)   │    │  (Port 8025)    │
                       └─────────────────┘    └─────────────────┘
                                │                       │
                                ▼                       ▼
                       ┌─────────────────┐    ┌─────────────────┐
                       │     Redis       │    │   Prometheus    │
                       │  (Port 6379)    │    │  (Port 9090)    │
                       └─────────────────┘    └─────────────────┘

🚀 Quick Start

1. Prerequisites

  • Docker and Docker Compose
  • At least 8GB RAM (for local LLM models)
  • OpenAI or Anthropic API key (optional, for cloud LLMs)

2. Setup

# Clone repository
git clone <repository-url>
cd llm-email-distribution

# Setup environment
cp .env.example .env
# Edit .env with your configuration

# Run setup script
chmod +x scripts/setup.sh
./scripts/setup.sh

3. Configuration

Edit .env file with your settings:

# LLM Provider (openai, anthropic, or ollama)
LLM_PROVIDER=ollama
OPENAI_API_KEY=your_key_here
ANTHROPIC_API_KEY=your_key_here

# SMTP Settings
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_USER=[email protected]
SMTP_PASSWORD=your_app_password
SMTP_USE_TLS=true

# Security
API_TOKEN=your_secure_token_here

4. Start Services

docker-compose up -d

📖 Usage

Generate Application via API

curl -X POST http://localhost:8000/generate \
  -H "Authorization: Bearer your_token" \
  -H "Content-Type: application/json" \
  -d '{
    "app_type": "dashboard",
    "description": "Analytics dashboard with charts",
    "recipient_email": "[email protected]",
    "tech_stack": ["python", "fastapi", "html"],
    "features": ["responsive_design", "charts", "real_time_updates"]
  }'

Generate via Webhook

curl -X POST http://localhost:9000/webhook/generate \
  -H "Content-Type: application/json" \
  -d '{
    "app_type": "api",
    "description": "REST API with authentication",
    "recipient_email": "[email protected]",
    "tech_stack": ["python", "fastapi"],
    "features": ["jwt_auth", "database", "swagger_docs"]
  }'

Check Generation Status

curl -H "Authorization: Bearer your_token" \
  http://localhost:8000/status/{request_id}

🔧 Services

Service            Port   Description
LLM Generator      8000   Main API for code generation
SMTP Service       5000   Email sending service
Webhook Receiver   9000   Webhook handling
MailHog UI         8025   Email testing interface
Ollama             11434  Local LLM service
Prometheus         9090   Metrics collection
Grafana            3000   Monitoring dashboard
Redis              6379   Caching and job queue

📧 Email Format

Generated applications are sent as ZIP attachments containing:

  • Source code files (main.py, templates, static files)
  • Dockerfile for containerization
  • requirements.txt or package.json
  • README.md with setup instructions
  • metadata.json with generation details

🧪 Testing

# Run system tests
chmod +x scripts/test-system.sh
./scripts/test-system.sh

# Send test email
curl -X POST http://localhost:5000/send-test \
  -H "Content-Type: application/json" \
  -d '{"recipient": "[email protected]"}'

# View sent emails
open http://localhost:8025

📊 Monitoring

Prometheus collects service metrics on port 9090 and Grafana dashboards are available on port 3000 (see the Services table).

πŸ› οΈ Development

Adding New LLM Providers

  1. Extend LLMProvider class in llm-generator/llm_providers.py
  2. Add provider configuration in main.py
  3. Update environment variables
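As a sketch of step 1, assuming a minimal `LLMProvider` base class with a single `generate` method (the real interface in `llm-generator/llm_providers.py` may differ; `EchoProvider` and `get_provider` are illustrative names):

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Assumed minimal provider interface; see llm_providers.py for the real one."""

    @abstractmethod
    def generate(self, prompt: str) -> str:
        ...

class EchoProvider(LLMProvider):
    """Toy provider used only to illustrate the extension point."""

    def generate(self, prompt: str) -> str:
        # A real provider would call its SDK or HTTP API here.
        return f"# generated for: {prompt}"

def get_provider(name: str) -> LLMProvider:
    # Mirrors the LLM_PROVIDER switch described in the configuration section.
    providers = {"echo": EchoProvider}
    return providers[name]()
```

A cloud provider would follow the same shape, wrapping its client library inside `generate`.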

Custom Templates

  1. Add Jinja2 templates in templates/ directory
  2. Reference in code_generator.py
  3. Customize prompts for specific application types

Email Templates

  1. Modify email_packager.py for custom email formats
  2. Add HTML templates in email-templates/
  3. Customize attachment handling

🔒 Security Considerations

  • API Authentication: Use strong API tokens
  • SMTP Security: Use app passwords, not account passwords
  • Email Validation: Recipients are validated before sending
  • Code Review: Generated code should be reviewed before production use
  • Rate Limiting: Implement rate limiting for production deployments

πŸ› Troubleshooting

Common Issues

  1. Ollama model download fails

    docker-compose exec ollama ollama pull codellama:7b-instruct
  2. SMTP authentication errors

    • Check app password configuration
    • Verify SMTP settings in .env
  3. Memory issues with local LLMs

    • Reduce model size or increase Docker memory limits
    • Switch to cloud LLM providers
  4. Email delivery issues

    • Check the MailHog UI (http://localhost:8025) when testing locally
    • Verify SMTP credentials and TLS settings in .env

Logs

# View all logs
docker-compose logs

# Specific service logs
docker-compose logs -f llm-generator
docker-compose logs -f smtp-service

πŸ“ License

Apache License - see LICENSE file for details.

🤝 Contributing

  1. Fork the repository
  2. Create feature branch
  3. Add tests for new features
  4. Submit pull request

📞 Support

  • Create GitHub issues for bugs and feature requests
  • Check logs for troubleshooting
  • Review configuration in .env file

AI LLM Email Distribution: Analysis of the Concept and Implementation

Introduction to the Concept

Email as a distribution protocol for AI-generated software is a concept that combines the capabilities of Large Language Models (LLMs) with traditional email infrastructure. The idea is to automatically distribute dynamically generated code and applications directly over SMTP, using email as the medium for both transport and metadata.

Key elements of the system:

  • LLM Generator: an AI model that generates code on demand
  • SMTP Server: an email server as the distribution channel
  • Webhook Interface: an API for triggering generation and delivery
  • Metadata Packaging: automatic creation of self-extracting packages
  • Email Parsing: automatic extraction and execution of attachments

Advantages and Disadvantages of the Model

✅ Advantages

Email infrastructure is universal:

  • Every organization already has a working email system
  • No additional deployment tooling is needed
  • Natural compatibility with existing workflows

AI-driven personalization:

  • Code is generated on demand from a specification
  • Dynamic adaptation to the user's environment
  • Dependencies and configuration are included automatically

Asynchronous distribution:

  • No blocking operations during generation
  • Requests are queued in the SMTP queue
  • Scalability through distributed email servers

Audit trail and versioning:

  • Email history provides a natural logging system
  • Rollback is possible by resending older versions
  • Compliance with corporate email policies

Zero-dependency deployment:

  • No CI/CD pipelines required
  • No VPN or internal network access needed
  • Works through firewall restrictions

❌ Disadvantages

Security limitations:

  • Email was not designed as a medium for executables
  • Code signing and verification are difficult
  • Vulnerable to email interception

Scalability problems:

  • Email attachment size limits (typically 25-50 MB)
  • SMTP delivery delays and retry mechanisms
  • No real-time feedback on deployment status

Debugging complexity:

  • Deployment errors are hard to trace
  • Limited logging capabilities
  • Problems with dependency resolution

Compliance and audit issues:

  • Potential conflicts with corporate IT policies
  • Change management tracking is difficult
  • Legal issues with automated code distribution

Distribution in Practice

System Architecture

┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   User Request  │───▶│   LLM Generator │───▶│  SMTP Gateway   │
│  (Webhook/API)  │    │   (Code Gen)    │    │   (Email Send)  │
└─────────────────┘    └─────────────────┘    └─────────────────┘
                                │                       │
                                ▼                       ▼
                       ┌─────────────────┐    ┌─────────────────┐
                       │   Metadata      │    │   User Inbox    │
                       │   Packaging     │    │   (Receive)     │
                       └─────────────────┘    └─────────────────┘
                                │                       │
                                ▼                       ▼
                       ┌─────────────────┐    ┌─────────────────┐
                       │  EML Creation   │    │  Auto Extract   │
                       │  (Self-Extract) │    │   (Execute)     │
                       └─────────────────┘    └─────────────────┘

Process flow:

  1. Request initiation: a webhook or API call with application parameters
  2. LLM processing: the AI generates code based on the input parameters
  3. Metadata enrichment: dependencies and configs are added automatically
  4. EML packaging: a self-extracting email archive is created
  5. SMTP delivery: the message is sent through the configured SMTP server
  6. Client reception: the email is received and processed automatically
  7. Execution: the application is launched in the target environment

Using webhooks:

Inbound webhooks (triggering generation):

{
  "app_type": "dashboard",
  "requirements": ["Python", "FastAPI", "Docker"],
  "recipient": "[email protected]",
  "parameters": {
    "database": "PostgreSQL",
    "auth": "OAuth2",
    "deployment": "containerized"
  }
}
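A minimal validator for this inbound payload might look like the sketch below; `parse_generation_request` and the required-field set are illustrative assumptions, not the project's actual webhook handler:

```python
import json

# Fields assumed mandatory for a generation request (illustrative).
REQUIRED_FIELDS = {"app_type", "requirements", "recipient"}

def parse_generation_request(raw: bytes) -> dict:
    """Validate an inbound webhook body before queueing generation."""
    payload = json.loads(raw)
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    # Parameters are optional; default to an empty dict.
    payload.setdefault("parameters", {})
    return payload

raw = b'{"app_type": "dashboard", "requirements": ["Python"], "recipient": "[email protected]"}'
request = parse_generation_request(raw)
```

In the real system this check would sit behind the webhook endpoint on port 9000 before the request reaches the LLM generator.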

Outbound webhooks (status notifications):

{
  "status": "email_sent",
  "request_id": "req_12345",
  "recipient": "[email protected]",
  "timestamp": "2025-06-19T10:30:00Z",
  "tracking_id": "email_67890"
}

Example Use Cases

1. Enterprise Internal Tools

  • Automatic generation of admin dashboards
  • Custom reporting applications
  • One-off automation scripts for specific tasks

2. Client Deliverables

  • Personalized demos for sales presentations
  • Custom integrations for client environments
  • Proof-of-concept applications

3. Emergency Deployments

  • Hotfix distribution when CI/CD is down
  • Disaster recovery tools
  • Quick patches for critical systems

4. Training and Development

  • Personalized learning environments
  • Custom exercise generators
  • Development environment setup

Technical Implementation Aspects

LLM integration considerations:

Model selection criteria:

  • Code generation capabilities (Python, JavaScript, Docker)
  • Support for structured output (JSON metadata)
  • Rate limiting and cost considerations
  • Local vs. cloud deployment options

Prompt engineering patterns:

GENERATION_PROMPT = """
Generate a complete {app_type} application with the following requirements:
- Technology stack: {tech_stack}
- Deployment target: {deployment_target}
- Features: {features}

Include:
1. Complete source code
2. Dockerfile for containerization
3. Deployment instructions
4. Configuration files
5. Basic tests

Output as JSON with file paths and contents.
"""

SMTP server considerations:

Authentication and security:

  • OAuth2 for Gmail/Office365 integration
  • SMTP-AUTH for dedicated servers
  • TLS encryption for all communications
  • Rate limiting for abuse prevention

Delivery optimization:

  • Queue management for bulk operations
  • Retry logic for failed deliveries
  • Monitoring and alerting for SMTP health
  • Load balancing across multiple SMTP servers

Email formatting strategies:

MIME structure optimization:

multipart/mixed
├── text/plain (human readable summary)
├── text/html (rich formatted instructions)
├── application/octet-stream (source_code.zip)
├── application/json (metadata.json)
└── text/x-dockerfile (Dockerfile)
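Using Python's standard email package, a message of this shape can be assembled roughly as follows. This is a sketch: the Dockerfile part is omitted for brevity, and the subject line and function name are placeholders:

```python
from email.message import EmailMessage

def build_distribution_email(summary: str, html: str,
                             zip_bytes: bytes, metadata: str) -> EmailMessage:
    """Assemble a multipart/mixed message with the parts sketched above."""
    msg = EmailMessage()
    msg["Subject"] = "Generated application"
    msg.set_content(summary)                      # text/plain part
    msg.add_alternative(html, subtype="html")     # text/html part
    msg.add_attachment(zip_bytes, maintype="application",
                       subtype="octet-stream", filename="source_code.zip")
    msg.add_attachment(metadata.encode(), maintype="application",
                       subtype="json", filename="metadata.json")
    return msg

msg = build_distribution_email("summary", "<p>instructions</p>",
                               b"PK\x03\x04", '{"version": "1.0"}')
```

Note that `add_alternative` nests the plain and HTML bodies in a multipart/alternative part, which is the conventional refinement of the flat layout shown in the tree.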

Metadata standardization:

{
  "version": "1.0",
  "generated_at": "2025-06-19T10:30:00Z",
  "llm_model": "gpt-4",
  "request_id": "req_12345",
  "app_metadata": {
    "name": "Custom Dashboard",
    "type": "web_application",
    "runtime": "python:3.11",
    "dependencies": ["fastapi", "uvicorn", "pydantic"]
  },
  "deployment": {
    "method": "docker",
    "port": 8080,
    "environment_vars": ["DATABASE_URL", "SECRET_KEY"]
  },
  "execution_instructions": [
    "docker build -t custom-dashboard .",
    "docker run -p 8080:8080 custom-dashboard"
  ]
}
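On the client side, this metadata could be turned into deployment commands. The sketch below only derives the commands and executes nothing; the fallback naming convention is an assumption, not part of the metadata spec:

```python
import json

def deployment_commands(metadata_json: str) -> list[str]:
    """Derive shell commands from metadata.json, preferring explicit instructions."""
    meta = json.loads(metadata_json)
    if "execution_instructions" in meta:
        return meta["execution_instructions"]
    # Assumed fallback: conventional docker build/run pair from app metadata.
    name = meta["app_metadata"]["name"].lower().replace(" ", "-")
    port = meta["deployment"]["port"]
    return [f"docker build -t {name} .",
            f"docker run -p {port}:{port} {name}"]

cmds = deployment_commands(
    '{"execution_instructions": ["docker build -t custom-dashboard .",'
    ' "docker run -p 8080:8080 custom-dashboard"]}')
```

A real client would validate the commands against a policy before running them, for the security reasons discussed above.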

Comparison with Alternative Solutions

Aspect                  Email Distribution  GitHub Actions  Docker Registry  Package Managers
Setup complexity        Low                 Medium          Medium           High
Infrastructure deps     Email only          Git + CI/CD     Registry server  Package repos
Real-time feedback      Limited             Excellent       Good             Good
Security                Basic               Strong          Strong           Excellent
Versioning              Email history       Git-based       Tag-based        Semantic versioning
Rollback                Manual resend       Automated       Tag switching    Version downgrade
Enterprise integration  Native              Good            Good             Excellent
Debugging               Limited             Excellent       Good             Good

Reference Implementation

The system consists of three main components:

1. AI Code Generator Service

  • REST API for request handling
  • LLM integration (OpenAI/Anthropic/local)
  • Template management system
  • Code validation and testing

2. Email Distribution Service

  • SMTP server integration
  • Email template generation
  • Attachment handling
  • Delivery tracking

3. Client Integration Tools

  • Email parsing utilities
  • Automatic extraction scripts
  • Execution wrappers
  • Status reporting hooks
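The extraction step in the client tools can be sketched with the stdlib email parser; `extract_attachments` is an illustrative helper, and writing files to disk or executing anything is deliberately left out:

```python
from email import message_from_bytes
from email.message import EmailMessage
from email.policy import default

def extract_attachments(eml_bytes: bytes) -> dict[str, bytes]:
    """Return {filename: payload} for every attachment in an .eml message."""
    msg = message_from_bytes(eml_bytes, policy=default)
    return {part.get_filename(): part.get_payload(decode=True)
            for part in msg.iter_attachments()}

# Round-trip demo with a synthetic message.
sample = EmailMessage()
sample.set_content("see attachment")
sample.add_attachment(b"PK\x03\x04", maintype="application",
                      subtype="octet-stream", filename="source_code.zip")
files = extract_attachments(sample.as_bytes())
```

A production extractor would also verify a signature or checksum before unpacking, given the security caveats above.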

Conclusions and Recommendations

Email-based AI software distribution is an interesting concept for specific use cases, but it will not replace traditional methods for production systems.

Recommended uses:

  • Prototyping and rapid development
  • Internal tool distribution in small teams
  • Emergency deployment scenarios
  • Educational and training environments

Not recommended for:

  • Production deployment systems
  • Security-critical applications
  • High-frequency update cycles
  • Applications requiring complex dependency management

Key success factors:

  • Strong email infrastructure
  • Proper security protocols
  • Clear governance policies
  • Comprehensive monitoring
  • User education and training

The system can be a valuable addition to a developer toolkit, but it should complement, not replace, established deployment methodologies.
