A complete Docker-based system for generating and distributing software applications via email using Large Language Models (LLMs).
- AI-Powered Code Generation: Uses OpenAI, Anthropic, or local Ollama models
- Email Distribution: Automatically packages and sends generated applications via SMTP
- Multi-LLM Support: Switch between cloud and local LLM providers
- Webhook Integration: Trigger generation via HTTP webhooks
- Docker Containerization: Complete Docker-based deployment
- Monitoring: Built-in Prometheus metrics and Grafana dashboards
- Email Testing: Local MailHog server for development
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   Webhook API   │───▶│  LLM Generator  │───▶│  SMTP Service   │
│   (Port 9000)   │    │   (Port 8000)   │    │   (Port 5000)   │
└─────────────────┘    └─────────────────┘    └─────────────────┘
                                │                      │
                                ▼                      ▼
                       ┌─────────────────┐    ┌─────────────────┐
                       │   Ollama LLM    │    │     MailHog     │
                       │  (Port 11434)   │    │   (Port 8025)   │
                       └─────────────────┘    └─────────────────┘
                                │                      │
                                ▼                      ▼
                       ┌─────────────────┐    ┌─────────────────┐
                       │      Redis      │    │   Prometheus    │
                       │   (Port 6379)   │    │   (Port 9090)   │
                       └─────────────────┘    └─────────────────┘
- Docker and Docker Compose
- At least 8GB RAM (for local LLM models)
- OpenAI or Anthropic API key (optional, for cloud LLMs)
# Clone repository
git clone <repository-url>
cd llm-email-distribution
# Setup environment
cp .env.example .env
# Edit .env with your configuration
# Run setup script
chmod +x scripts/setup.sh
./scripts/setup.sh
Edit the .env file with your settings:
# LLM Provider (openai, anthropic, or ollama)
LLM_PROVIDER=ollama
OPENAI_API_KEY=your_key_here
ANTHROPIC_API_KEY=your_key_here
# SMTP Settings
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
[email protected]
SMTP_PASSWORD=your_app_password
SMTP_USE_TLS=true
# Security
API_TOKEN=your_secure_token_here
docker-compose up -d
curl -X POST http://localhost:8000/generate \
-H "Authorization: Bearer your_token" \
-H "Content-Type: application/json" \
-d '{
"app_type": "dashboard",
"description": "Analytics dashboard with charts",
"recipient_email": "[email protected]",
"tech_stack": ["python", "fastapi", "html"],
"features": ["responsive_design", "charts", "real_time_updates"]
}'
curl -X POST http://localhost:9000/webhook/generate \
-H "Content-Type: application/json" \
-d '{
"app_type": "api",
"description": "REST API with authentication",
"recipient_email": "[email protected]",
"tech_stack": ["python", "fastapi"],
"features": ["jwt_auth", "database", "swagger_docs"]
}'
curl -H "Authorization: Bearer your_token" \
http://localhost:8000/status/{request_id}
| Service | Port | Description |
|---|---|---|
| LLM Generator | 8000 | Main API for code generation |
| SMTP Service | 5000 | Email sending service |
| Webhook Receiver | 9000 | Webhook handling |
| MailHog UI | 8025 | Email testing interface |
| Ollama | 11434 | Local LLM service |
| Prometheus | 9090 | Metrics collection |
| Grafana | 3000 | Monitoring dashboard |
| Redis | 6379 | Caching and job queue |
Generated applications are sent as ZIP attachments containing:
- Source code files (main.py, templates, static files)
- Dockerfile for containerization
- requirements.txt or package.json
- README.md with setup instructions
- metadata.json with generation details
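A minimal packaging sketch in Python, assuming the generator returns files as a path-to-content mapping (the helper name and signature are illustrative, not the actual email_packager.py API):

import io
import json
import zipfile

def package_app(files: dict[str, str], metadata: dict) -> bytes:
    """Bundle generated source files plus metadata.json into an in-memory ZIP."""
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
        for path, content in files.items():
            archive.writestr(path, content)
        archive.writestr("metadata.json", json.dumps(metadata, indent=2))
    return buffer.getvalue()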
# Run system tests
chmod +x scripts/test-system.sh
./scripts/test-system.sh
# Send test email
curl -X POST http://localhost:5000/send-test \
-H "Content-Type: application/json" \
-d '{"recipient": "[email protected]"}'
# View sent emails
open http://localhost:8025
- Prometheus: http://localhost:9090
- Grafana: http://localhost:3000 (admin/admin123)
- Application Logs: docker-compose logs -f service_name
- Extend the LLMProvider class in llm-generator/llm_providers.py (a minimal sketch follows this list)
- Add the provider configuration in main.py
- Update the environment variables
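A hedged sketch of what a new provider might look like; the actual base-class interface in llm-generator/llm_providers.py may differ, and the HTTP endpoint shown here mimics Ollama's /api/generate:

import httpx

class LLMProvider:
    """Illustrative base class; the real one lives in llm-generator/llm_providers.py."""
    async def generate(self, prompt: str) -> str:
        raise NotImplementedError

class CustomOllamaProvider(LLMProvider):
    def __init__(self, base_url: str = "http://ollama:11434",
                 model: str = "codellama:7b-instruct"):
        self.base_url = base_url
        self.model = model

    async def generate(self, prompt: str) -> str:
        # Call the local Ollama server and return the generated text.
        async with httpx.AsyncClient(timeout=120) as client:
            resp = await client.post(
                f"{self.base_url}/api/generate",
                json={"model": self.model, "prompt": prompt, "stream": False},
            )
            resp.raise_for_status()
            return resp.json()["response"]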
- Add Jinja2 templates in the templates/ directory (a rendering sketch follows this list)
- Reference them in code_generator.py
- Customize prompts for specific application types
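A small sketch of how a template could be loaded and rendered with Jinja2; template names and context variables are illustrative, see code_generator.py for the actual wiring:

from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader("templates"))

def render_template(app_type: str, **context) -> str:
    """Load templates/<app_type>.j2 and fill it with request-specific context."""
    return env.get_template(f"{app_type}.j2").render(**context)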
- Modify email_packager.py for custom email formats
- Add HTML templates in email-templates/
- Customize attachment handling
- API Authentication: Use strong API tokens
- SMTP Security: Use app passwords, not account passwords
- Email Validation: Recipients are validated before sending (see the sketch after this list)
- Code Review: Generated code should be reviewed before production use
- Rate Limiting: Implement rate limiting for production deployments
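A minimal recipient-validation sketch; this is a simple pattern check, and the project's actual validation may be stricter:

import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_recipient(address: str) -> str:
    """Reject obviously malformed addresses before an email is queued."""
    address = address.strip()
    if not EMAIL_RE.match(address):
        raise ValueError(f"Invalid recipient address: {address!r}")
    return address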
- Ollama model download fails
  docker-compose exec ollama ollama pull codellama:7b-instruct
- SMTP authentication errors
  - Check app password configuration
  - Verify SMTP settings in .env
- Memory issues with local LLMs
  - Reduce model size or increase Docker memory limits
  - Switch to cloud LLM providers
- Email delivery issues
  - Check MailHog for local testing: http://localhost:8025
  - Verify SMTP configuration and credentials
# View all logs
docker-compose logs
# Specific service logs
docker-compose logs -f llm-generator
docker-compose logs -f smtp-service
Apache License - see LICENSE file for details.
- Fork the repository
- Create feature branch
- Add tests for new features
- Submit pull request
- Create GitHub issues for bugs and feature requests
- Check logs for troubleshooting
- Review configuration in .env file
Email as a distribution protocol for AI-generated software is a revolutionary concept that combines the capabilities of Large Language Models (LLMs) with traditional email infrastructure. The idea is to automatically distribute dynamically generated code and applications directly over SMTP, using email as the transport and metadata medium.
- LLM Generator: an AI model that generates code on demand
- SMTP Server: an email server acting as the distribution channel
- Webhook Interface: an API for triggering generation and delivery
- Metadata Packaging: automatic creation of self-extracting packages
- Email Parsing: automatic extraction and execution of attachments
Email infrastructure is universal:
- Every organization already has a working email system
- No additional deployment tools are needed
- Natural compatibility with existing workflows
AI-driven personalization:
- Code is generated on demand from a specification
- Dynamic adaptation to the user's environment
- Dependencies and configuration are included automatically
Asynchronous distribution:
- No blocking operations during generation
- Requests are queued in the SMTP queue
- Scalability through distributed email servers
Audit trail and versioning:
- Email history provides a natural logging system
- Rollback is possible by resending older versions
- Compliance with corporate email policies
Zero-dependency deployment:
- No CI/CD pipelines required
- No VPN or internal network access needed
- Works through firewall restrictions
Security limitations:
- Email was not designed as a medium for executables
- Code signing and verification are difficult
- Susceptible to email interception
Scalability problems:
- Email attachment size limits (typically 25-50 MB)
- SMTP delivery delays and retry mechanisms
- No real-time feedback on deployment status
Debugging complexity:
- Deployment errors are hard to trace
- Limited logging capabilities
- Problems with dependency resolution
Compliance and audit issues:
- Potential conflicts with corporate IT policies
- Difficult change-management tracking
- Legal issues around automated code distribution
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│  User Request   │───▶│  LLM Generator  │───▶│  SMTP Gateway   │
│  (Webhook/API)  │    │   (Code Gen)    │    │  (Email Send)   │
└─────────────────┘    └─────────────────┘    └─────────────────┘
                                │                      │
                                ▼                      ▼
                       ┌─────────────────┐    ┌─────────────────┐
                       │    Metadata     │    │   User Inbox    │
                       │   Packaging     │    │   (Receive)     │
                       └─────────────────┘    └─────────────────┘
                                │                      │
                                ▼                      ▼
                       ┌─────────────────┐    ┌─────────────────┐
                       │  EML Creation   │    │  Auto Extract   │
                       │ (Self-Extract)  │    │   (Execute)     │
                       └─────────────────┘    └─────────────────┘
- Request initiation: a webhook or API call with the application parameters
- LLM Processing: the AI generates code based on the input parameters
- Metadata enrichment: dependencies and configuration are added automatically
- EML Packaging: a self-extracting email archive is created
- SMTP Delivery: the message is sent through the configured SMTP server
- Client Reception: the email is received and processed automatically
- Execution: the application is launched in the target environment
Inbound webhooks (triggering generation):
{
"app_type": "dashboard",
"requirements": ["Python", "FastAPI", "Docker"],
"recipient": "[email protected]",
"parameters": {
"database": "PostgreSQL",
"auth": "OAuth2",
"deployment": "containerized"
}
}
Outbound webhooks (status notifications):
{
"status": "email_sent",
"request_id": "req_12345",
"recipient": "[email protected]",
"timestamp": "2025-06-19T10:30:00Z",
"tracking_id": "email_67890"
}
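A minimal FastAPI sketch of an inbound webhook handler using the payload fields above; the real webhook receiver on port 9000 may differ, and the request id shown is a placeholder:

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class GenerationRequest(BaseModel):
    app_type: str
    requirements: list[str]
    recipient: str
    parameters: dict = {}

@app.post("/webhook/generate")
async def trigger_generation(req: GenerationRequest):
    # In the real service this would enqueue a generation job (e.g. in Redis)
    # and an outbound webhook would later report "email_sent".
    request_id = "req_12345"  # placeholder
    return {"status": "accepted", "request_id": request_id}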
- Automatyczne generowanie admin dashboards
- Custom reporting applications
- One-off automation scripts for specific tasks
- Personalized demos for sales presentations
- Custom integrations for client environments
- Proof-of-concept applications
- Hotfix distribution when CI/CD is down
- Disaster recovery tools
- Quick patches for critical systems
- Personalized learning environments
- Custom exercise generators
- Development environment setup
Model selection criteria:
- Code generation capabilities (Python, JavaScript, Docker)
- Support for structured output (JSON metadata)
- Rate limiting and cost considerations
- Local vs. cloud deployment options
Prompt engineering patterns:
GENERATION_PROMPT = """
Generate a complete {app_type} application with the following requirements:
- Technology stack: {tech_stack}
- Deployment target: {deployment_target}
- Features: {features}
Include:
1. Complete source code
2. Dockerfile for containerization
3. Deployment instructions
4. Configuration files
5. Basic tests
Output as JSON with file paths and contents.
"""
Authentication and security:
- OAuth2 for Gmail/Office 365 integration
- SMTP AUTH for dedicated servers (see the STARTTLS sketch after this list)
- TLS encryption for all communications
- Rate limiting for abuse prevention
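A minimal STARTTLS sketch with smtplib; the host, port, and credentials map to the SMTP_* variables in .env:

import smtplib
from email.message import EmailMessage

def send_message(msg: EmailMessage, host: str, port: int, user: str, password: str) -> None:
    """Send a message over STARTTLS with SMTP AUTH."""
    with smtplib.SMTP(host, port) as smtp:
        smtp.starttls()
        smtp.login(user, password)
        smtp.send_message(msg)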
Delivery optimization:
- Queue management for bulk operations
- Retry logic for failed deliveries (a backoff sketch follows this list)
- Monitoring and alerting for SMTP health
- Load balancing across multiple SMTP servers
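One possible retry policy, sketched with exponential backoff; the production services may use a queue-based mechanism instead:

import smtplib
import time

def send_with_retry(send_fn, attempts: int = 3, base_delay: float = 2.0) -> None:
    """Call send_fn, retrying transient SMTP failures with exponential backoff."""
    for attempt in range(1, attempts + 1):
        try:
            send_fn()
            return
        except smtplib.SMTPException:
            if attempt == attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))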
MIME structure optimization:
multipart/mixed
├── text/plain (human readable summary)
├── text/html (rich formatted instructions)
├── application/octet-stream (source_code.zip)
├── application/json (metadata.json)
└── text/x-dockerfile (Dockerfile)
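A sketch of assembling that layout with the standard library; in practice the plain-text and HTML parts end up in a nested multipart/alternative, and the helper name is illustrative:

from email.message import EmailMessage

def build_package_email(summary: str, html: str, zip_bytes: bytes,
                        metadata_json: bytes, dockerfile: str) -> EmailMessage:
    """Assemble the multipart email shown above."""
    msg = EmailMessage()
    msg["Subject"] = "Generated application package"
    msg.set_content(summary)                              # text/plain
    msg.add_alternative(html, subtype="html")             # text/html
    msg.add_attachment(zip_bytes, maintype="application",
                       subtype="octet-stream", filename="source_code.zip")
    msg.add_attachment(metadata_json, maintype="application",
                       subtype="json", filename="metadata.json")
    msg.add_attachment(dockerfile.encode(), maintype="text",
                       subtype="x-dockerfile", filename="Dockerfile")
    return msg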
Metadata standardization:
{
"version": "1.0",
"generated_at": "2025-06-19T10:30:00Z",
"llm_model": "gpt-4",
"request_id": "req_12345",
"app_metadata": {
"name": "Custom Dashboard",
"type": "web_application",
"runtime": "python:3.11",
"dependencies": ["fastapi", "uvicorn", "pydantic"]
},
"deployment": {
"method": "docker",
"port": 8080,
"environment_vars": ["DATABASE_URL", "SECRET_KEY"]
},
"execution_instructions": [
"docker build -t custom-dashboard .",
"docker run -p 8080:8080 custom-dashboard"
]
}
| Aspect | Email Distribution | GitHub Actions | Docker Registry | Package Managers |
|---|---|---|---|---|
| Setup complexity | Low | Medium | Medium | High |
| Infrastructure deps | Email only | Git + CI/CD | Registry server | Package repos |
| Real-time feedback | Limited | Excellent | Good | Good |
| Security | Basic | Strong | Strong | Excellent |
| Versioning | Email history | Git-based | Tag-based | Semantic versioning |
| Rollback | Manual resend | Automated | Tag switching | Version downgrade |
| Enterprise integration | Native | Good | Good | Excellent |
| Debugging | Limited | Excellent | Good | Good |
The system consists of three main components:
- REST API for request handling
- LLM integration (OpenAI/Anthropic/local)
- Template management system
- Code validation and testing
- SMTP server integration
- Email template generation
- Attachment handling
- Delivery tracking
- Email parsing utilities (an extraction sketch follows this list)
- Automatic extraction scripts
- Execution wrappers
- Status reporting hooks
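An illustrative client-side extraction sketch: pull the ZIP attachment out of a saved .eml file and unpack it (file names follow the package layout described earlier):

import email
import zipfile
from email import policy
from pathlib import Path

def extract_package(eml_path: str, out_dir: str = "extracted_app") -> None:
    """Extract source_code.zip from a received .eml file and unpack it."""
    msg = email.message_from_bytes(Path(eml_path).read_bytes(), policy=policy.default)
    for part in msg.iter_attachments():
        if part.get_filename() == "source_code.zip":
            target = Path(out_dir)
            target.mkdir(parents=True, exist_ok=True)
            archive_path = target / "source_code.zip"
            archive_path.write_bytes(part.get_payload(decode=True))
            with zipfile.ZipFile(archive_path) as archive:
                archive.extractall(target)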
Email-based AI software distribution is an interesting concept for specific use cases, but it will not replace traditional methods for production systems.
Recommended applications:
- Prototyping and rapid development
- Internal tool distribution in small teams
- Emergency deployment scenarios
- Educational and training environments
Not recommended for:
- Production deployment systems
- Security-critical applications
- High-frequency update cycles
- Applications requiring complex dependency management
Key success factors:
- Strong email infrastructure
- Proper security protocols
- Clear governance policies
- Comprehensive monitoring
- User education and training
The system can be a valuable addition to a developer toolkit, but it should complement, not replace, established deployment methodologies.