A complete Docker-based system for generating and distributing software applications via email using Large Language Models (LLMs).
- AI-Powered Code Generation: Uses OpenAI, Anthropic, or local Ollama models
- Email Distribution: Automatically packages and sends generated applications via SMTP
- Multi-LLM Support: Switch between cloud and local LLM providers
- Webhook Integration: Trigger generation via HTTP webhooks
- Docker Containerization: Complete Docker-based deployment
- Monitoring: Built-in Prometheus metrics and Grafana dashboards
- Email Testing: Local MailHog server for development
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│   Webhook API   │────▶│  LLM Generator  │────▶│  SMTP Service   │
│   (Port 9000)   │     │   (Port 8000)   │     │   (Port 5000)   │
└─────────────────┘     └─────────────────┘     └─────────────────┘
                                 │                       │
                                 ▼                       ▼
                        ┌─────────────────┐     ┌─────────────────┐
                        │   Ollama LLM    │     │     MailHog     │
                        │  (Port 11434)   │     │   (Port 8025)   │
                        └─────────────────┘     └─────────────────┘
                                 │                       │
                                 ▼                       ▼
                        ┌─────────────────┐     ┌─────────────────┐
                        │      Redis      │     │   Prometheus    │
                        │   (Port 6379)   │     │   (Port 9090)   │
                        └─────────────────┘     └─────────────────┘
- Docker and Docker Compose
- At least 8GB RAM (for local LLM models)
- OpenAI or Anthropic API key (optional, for cloud LLMs)
# Clone repository
git clone <repository-url>
cd llm-email-distribution
# Setup environment
cp .env.example .env
# Edit .env with your configuration
# Run setup script
chmod +x scripts/setup.sh
./scripts/setup.sh
Edit the .env file with your settings:
# LLM Provider (openai, anthropic, or ollama)
LLM_PROVIDER=ollama
OPENAI_API_KEY=your_key_here
ANTHROPIC_API_KEY=your_key_here
# SMTP Settings
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_USER=[email protected]
SMTP_PASSWORD=your_app_password
SMTP_USE_TLS=true
# Security
API_TOKEN=your_secure_token_here
docker-compose up -d
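Once the stack is up, a small check like the following can confirm that each service port from the architecture diagram is reachable. This is a minimal sketch that assumes all services are exposed on localhost with the default ports listed in this README:

```python
import socket

# Ports taken from the architecture diagram and services table above
SERVICES = {
    "LLM Generator": 8000,
    "SMTP Service": 5000,
    "Webhook Receiver": 9000,
    "MailHog UI": 8025,
    "Ollama": 11434,
    "Prometheus": 9090,
    "Grafana": 3000,
    "Redis": 6379,
}

for name, port in SERVICES.items():
    with socket.socket() as sock:
        sock.settimeout(2)
        status = "up" if sock.connect_ex(("localhost", port)) == 0 else "DOWN"
    print(f"{name:18} :{port:<6} {status}")
```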
curl -X POST http://localhost:8000/generate \
-H "Authorization: Bearer your_token" \
-H "Content-Type: application/json" \
-d '{
"app_type": "dashboard",
"description": "Analytics dashboard with charts",
"recipient_email": "[email protected]",
"tech_stack": ["python", "fastapi", "html"],
"features": ["responsive_design", "charts", "real_time_updates"]
}'
curl -X POST http://localhost:9000/webhook/generate \
-H "Content-Type: application/json" \
-d '{
"app_type": "api",
"description": "REST API with authentication",
"recipient_email": "[email protected]",
"tech_stack": ["python", "fastapi"],
"features": ["jwt_auth", "database", "swagger_docs"]
}'
curl -H "Authorization: Bearer your_token" \
http://localhost:8000/status/{request_id}
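For scripting, the same status endpoint can be polled from Python. This is a sketch only: it assumes the `requests` package is installed and that the response contains a `status` field, which is an assumption rather than the service's documented schema.

```python
import time
import requests

API = "http://localhost:8000"
TOKEN = "your_token"        # same bearer token as API_TOKEN in .env
request_id = "req_12345"    # placeholder request id

while True:
    resp = requests.get(
        f"{API}/status/{request_id}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    print(data)
    # Stop once a terminal state is reported (field and values assumed)
    if data.get("status") in {"completed", "failed", "email_sent"}:
        break
    time.sleep(5)
```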
| Service | Port | Description |
|---|---|---|
| LLM Generator | 8000 | Main API for code generation |
| SMTP Service | 5000 | Email sending service |
| Webhook Receiver | 9000 | Webhook handling |
| MailHog UI | 8025 | Email testing interface |
| Ollama | 11434 | Local LLM service |
| Prometheus | 9090 | Metrics collection |
| Grafana | 3000 | Monitoring dashboard |
| Redis | 6379 | Caching and job queue |
Generated applications are sent as ZIP attachments containing:
- Source code files (main.py, templates, static files)
- Dockerfile for containerization
- requirements.txt or package.json
- README.md with setup instructions
- metadata.json with generation details
# Run system tests
chmod +x scripts/test-system.sh
./scripts/test-system.sh
# Send test email
curl -X POST http://localhost:5000/send-test \
-H "Content-Type: application/json" \
-d '{"recipient": "[email protected]"}'
# View sent emails
open http://localhost:8025
- Prometheus: http://localhost:9090
- Grafana: http://localhost:3000 (admin/admin123)
- Application Logs:
docker-compose logs -f service_name
- Extend the LLMProvider class in llm-generator/llm_providers.py (see the sketch below)
- Add the provider configuration in main.py
- Update the environment variables
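A minimal sketch of what a new provider could look like. The base-class interface shown here (an abstract `generate` method) and the HTTP endpoint are assumptions; the actual LLMProvider class in llm_providers.py may expose a different API.

```python
from abc import ABC, abstractmethod

import requests  # assumed HTTP client; use whatever the project already depends on


class LLMProvider(ABC):
    """Simplified stand-in for the base class in llm_providers.py."""

    @abstractmethod
    def generate(self, prompt: str) -> str: ...


class MyCustomProvider(LLMProvider):
    """Example provider calling a hypothetical HTTP completion endpoint."""

    def __init__(self, base_url: str, api_key: str):
        self.base_url = base_url
        self.api_key = api_key

    def generate(self, prompt: str) -> str:
        response = requests.post(
            f"{self.base_url}/v1/completions",          # endpoint is an assumption
            headers={"Authorization": f"Bearer {self.api_key}"},
            json={"prompt": prompt},
            timeout=120,
        )
        response.raise_for_status()
        return response.json()["text"]                   # field name is an assumption
```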
- Add Jinja2 templates in the templates/ directory (a rendering sketch follows this list)
- Reference them in code_generator.py
- Customize prompts for specific application types
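A minimal sketch of loading and rendering a prompt template from templates/ with Jinja2. The template file name and its variables are illustrative assumptions, not files that ship with the repository.

```python
from jinja2 import Environment, FileSystemLoader

# Load templates from the templates/ directory
env = Environment(loader=FileSystemLoader("templates"))

# "dashboard_prompt.j2" and its variables are hypothetical examples
template = env.get_template("dashboard_prompt.j2")
prompt = template.render(
    app_type="dashboard",
    tech_stack=["python", "fastapi", "html"],
    features=["responsive_design", "charts"],
)
print(prompt)
```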
- Modify email_packager.py for custom email formats
- Add HTML templates in email-templates/
- Customize attachment handling (a packaging sketch follows this list)
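A sketch of how generated files could be zipped in memory before being attached to the outgoing email. The helper name and the `{path: content}` mapping are assumptions, not the actual email_packager.py API.

```python
import io
import zipfile


def package_files(files: dict[str, str]) -> bytes:
    """Zip a {path: content} mapping in memory and return the raw bytes."""
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
        for path, content in files.items():
            archive.writestr(path, content)
    return buffer.getvalue()


zip_bytes = package_files({
    "main.py": "print('hello')",
    "README.md": "# Generated app\n",
})
print(len(zip_bytes), "bytes ready to attach")
```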
- API Authentication: Use strong API tokens
- SMTP Security: Use app passwords, not account passwords
- Email Validation: Recipients are validated before sending
- Code Review: Generated code should be reviewed before production use
- Rate Limiting: Implement rate limiting for production deployments (a minimal sketch follows)
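A minimal in-process rate-limiting sketch for a FastAPI endpoint. The window size, request budget, and endpoint wiring are illustrative assumptions; a production deployment would more likely use Redis (already part of the stack) or an API gateway.

```python
import time
from collections import defaultdict, deque

from fastapi import FastAPI, HTTPException, Request

app = FastAPI()
WINDOW_SECONDS = 60
MAX_REQUESTS = 10                      # assumed per-client budget
_hits: dict[str, deque] = defaultdict(deque)


def check_rate_limit(request: Request) -> None:
    """Naive sliding-window limiter keyed by client IP."""
    key = request.client.host
    now = time.monotonic()
    window = _hits[key]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        raise HTTPException(status_code=429, detail="Rate limit exceeded")
    window.append(now)


@app.post("/generate")
async def generate(request: Request):
    check_rate_limit(request)
    return {"status": "accepted"}      # placeholder response
```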
- Ollama model download fails
  docker-compose exec ollama ollama pull codellama:7b-instruct
- SMTP authentication errors
  - Check app password configuration
  - Verify SMTP settings in .env
- Memory issues with local LLMs
  - Reduce model size or increase Docker memory limits
  - Switch to cloud LLM providers
- Email delivery issues
  - Check MailHog for local testing: http://localhost:8025
  - Verify SMTP configuration and credentials
# View all logs
docker-compose logs
# Specific service logs
docker-compose logs -f llm-generator
docker-compose logs -f smtp-service
Apache License - see LICENSE file for details.
- Fork the repository
- Create feature branch
- Add tests for new features
- Submit pull request
- Create GitHub issues for bugs and feature requests
- Check logs for troubleshooting
- Review configuration in .env file
Email as a distribution protocol for AI-generated software is a revolutionary concept that combines the capabilities of Large Language Models (LLMs) with traditional email infrastructure. The idea is to automatically distribute dynamically generated code and applications directly over SMTP, using email as the medium for both transport and metadata.
- LLM Generator: AI model that generates code on demand
- SMTP Server: Email server used as the distribution channel
- Webhook Interface: API for triggering generation and delivery
- Metadata Packaging: Automatic creation of self-extracting packages
- Email Parsing: Automatic extraction and execution of attachments
Email infrastructure is universal:
- Every organization already has a working email system
- No additional deployment tooling is needed
- Natural compatibility with existing workflows
AI-driven personalization:
- Code generated on demand from a specification
- Dynamic adaptation to the user's environment
- Dependencies and configuration included automatically
Asynchronous distribution:
- No blocking operations during generation
- Requests are queued in the SMTP queue
- Scalability through distributed email servers
Audit trail and versioning:
- A natural logging system via email history
- Rollback possible by resending older versions
- Compliance with corporate email policies
Zero-dependency deployment:
- No CI/CD pipelines required
- No VPN or internal network access needed
- Works through firewall restrictions
Security limitations:
- Email was not designed as a medium for executables
- Code signing and verification are difficult
- Susceptible to email interception
Scalability problems:
- Email attachment size limits (typically 25-50 MB)
- SMTP delivery delays and retry mechanisms
- No real-time feedback on deployment status
Debugging complexity:
- Deployment errors are hard to trace
- Limited logging capabilities
- Dependency resolution problems
Compliance and audit issues:
- Potential conflicts with corporate IT policies
- Difficult change-management tracking
- Legal issues around automated code distribution
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│  User Request   │────▶│  LLM Generator  │────▶│  SMTP Gateway   │
│  (Webhook/API)  │     │   (Code Gen)    │     │  (Email Send)   │
└─────────────────┘     └─────────────────┘     └─────────────────┘
                                 │                       │
                                 ▼                       ▼
                        ┌─────────────────┐     ┌─────────────────┐
                        │    Metadata     │     │   User Inbox    │
                        │    Packaging    │     │    (Receive)    │
                        └─────────────────┘     └─────────────────┘
                                 │                       │
                                 ▼                       ▼
                        ┌─────────────────┐     ┌─────────────────┐
                        │  EML Creation   │     │  Auto Extract   │
                        │ (Self-Extract)  │     │    (Execute)    │
                        └─────────────────┘     └─────────────────┘
- Request initiation: a webhook or API call with the application parameters
- LLM Processing: the AI generates code based on the input parameters
- Metadata enrichment: dependencies and configuration are added automatically
- EML Packaging: a self-extracting email archive is created
- SMTP Delivery: the message is sent through the configured SMTP server
- Client Reception: the email is received and processed automatically
- Execution: the application is launched in the target environment (the full flow is sketched below)
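A compact sketch of the flow above as a single pipeline function. Every function body here is a placeholder stub named for illustration only; none of them correspond to the repository's actual modules.

```python
# Hypothetical end-to-end flow; all bodies are placeholder stubs.
import json


def generate_code(spec: dict) -> dict:
    # Stand-in for the LLM call: return a {path: content} mapping
    return {"main.py": "print('hello from generated app')"}


def enrich_metadata(spec: dict, files: dict) -> dict:
    return {"request": spec, "files": sorted(files)}


def package_as_eml(files: dict, metadata: dict) -> bytes:
    # Real packaging would build a MIME message (see the MIME sketch below)
    return json.dumps({"files": files, "metadata": metadata}).encode()


def send_via_smtp(payload: bytes, recipient: str) -> str:
    # Real delivery would go through smtplib / the SMTP service
    return f"queued-for-{recipient}"


def handle_request(spec: dict) -> str:
    files = generate_code(spec)                     # 2. LLM processing
    metadata = enrich_metadata(spec, files)         # 3. metadata enrichment
    eml = package_as_eml(files, metadata)           # 4. EML packaging
    return send_via_smtp(eml, spec["recipient"])    # 5. SMTP delivery


print(handle_request({"app_type": "dashboard", "recipient": "[email protected]"}))
```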
Inbound webhooks (triggering generation):
{
"app_type": "dashboard",
"requirements": ["Python", "FastAPI", "Docker"],
"recipient": "[email protected]",
"parameters": {
"database": "PostgreSQL",
"auth": "OAuth2",
"deployment": "containerized"
}
}
Outbound webhooks (status notifications):
{
"status": "email_sent",
"request_id": "req_12345",
"recipient": "[email protected]",
"timestamp": "2025-06-19T10:30:00Z",
"tracking_id": "email_67890"
}
- Automatic generation of admin dashboards
- Custom reporting applications
- One-off automation scripts for specific tasks
- Personalized demos for sales presentations
- Custom integrations for client environments
- Proof-of-concept applications
- Hotfix distribution when CI/CD is down
- Disaster recovery tools
- Quick patches for critical systems
- Personalized learning environments
- Custom exercise generators
- Development environment setup
Model selection criteria:
- Code generation capabilities (Python, JavaScript, Docker)
- Support for structured output (JSON metadata)
- Rate limiting and cost considerations
- Local vs. cloud deployment options
Prompt engineering patterns:
GENERATION_PROMPT = """
Generate a complete {app_type} application with the following requirements:
- Technology stack: {tech_stack}
- Deployment target: {deployment_target}
- Features: {features}
Include:
1. Complete source code
2. Dockerfile for containerization
3. Deployment instructions
4. Configuration files
5. Basic tests
Output as JSON with file paths and contents.
"""
Authentication and security:
- OAuth2 for Gmail/Office365 integration
- SMTP-AUTH for dedicated servers
- TLS encryption for all communications
- Rate limiting for abuse prevention
Delivery optimization:
- Queue management for bulk operations
- Retry logic for failed deliveries (see the sketch after this list)
- Monitoring and alerting for SMTP health
- Load balancing across multiple SMTP servers
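A minimal sketch of delivery over STARTTLS with simple retry logic, using Python's standard smtplib. The host, credentials, and retry/backoff policy mirror the .env example earlier and are assumptions, not the project's actual delivery code.

```python
import smtplib
import time
from email.message import EmailMessage

# Connection details mirror the .env example above (placeholders)
SMTP_HOST, SMTP_PORT = "smtp.gmail.com", 587
SMTP_USER, SMTP_PASSWORD = "[email protected]", "your_app_password"


def deliver(msg: EmailMessage, retries: int = 3, backoff: float = 5.0) -> None:
    """Send a message over STARTTLS, retrying transient failures."""
    for attempt in range(1, retries + 1):
        try:
            with smtplib.SMTP(SMTP_HOST, SMTP_PORT, timeout=30) as server:
                server.starttls()
                server.login(SMTP_USER, SMTP_PASSWORD)
                server.send_message(msg)
            return
        except (smtplib.SMTPException, OSError):
            if attempt == retries:
                raise
            time.sleep(backoff * attempt)
```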
MIME structure optimization:
multipart/mixed
├── text/plain (human readable summary)
├── text/html (rich formatted instructions)
├── application/octet-stream (source_code.zip)
├── application/json (metadata.json)
└── text/x-dockerfile (Dockerfile)
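A minimal sketch of assembling roughly this structure with Python's standard email library. All contents and filenames are placeholders; note that the plain-text and HTML parts end up grouped under a multipart/alternative inside the multipart/mixed container, which is the layout mail clients expect.

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Generated application"
msg.set_content("Your generated application is attached.")            # text/plain
msg.add_alternative("<p>Your generated application is attached.</p>",
                    subtype="html")                                    # text/html
msg.add_attachment(b"...zip bytes...", maintype="application",
                   subtype="octet-stream", filename="source_code.zip")
msg.add_attachment(b'{"version": "1.0"}', maintype="application",
                   subtype="json", filename="metadata.json")
msg.add_attachment("FROM python:3.11-slim\n", subtype="x-dockerfile",
                   filename="Dockerfile")                              # text/x-dockerfile
print(msg.get_content_type())   # multipart/mixed
```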
Metadata standardization:
{
"version": "1.0",
"generated_at": "2025-06-19T10:30:00Z",
"llm_model": "gpt-4",
"request_id": "req_12345",
"app_metadata": {
"name": "Custom Dashboard",
"type": "web_application",
"runtime": "python:3.11",
"dependencies": ["fastapi", "uvicorn", "pydantic"]
},
"deployment": {
"method": "docker",
"port": 8080,
"environment_vars": ["DATABASE_URL", "SECRET_KEY"]
},
"execution_instructions": [
"docker build -t custom-dashboard .",
"docker run -p 8080:8080 custom-dashboard"
]
}
| Aspect | Email Distribution | GitHub Actions | Docker Registry | Package Managers |
|---|---|---|---|---|
| Setup Complexity | Low | Medium | Medium | High |
| Infrastructure Deps | Email only | Git + CI/CD | Registry server | Package repos |
| Real-time Feedback | Limited | Excellent | Good | Good |
| Security | Basic | Strong | Strong | Excellent |
| Versioning | Email history | Git-based | Tag-based | Semantic versioning |
| Rollback | Manual resend | Automated | Tag switching | Version downgrade |
| Enterprise Integration | Native | Good | Good | Excellent |
| Debugging | Limited | Excellent | Good | Good |
The system consists of three main components:
- REST API for request handling
- LLM integration (OpenAI/Anthropic/Local)
- Template management system
- Code validation and testing
- SMTP server integration
- Email template generation
- Attachment handling
- Delivery tracking
- Email parsing utilities (an extraction sketch follows this list)
- Automatic extraction scripts
- Execution wrappers
- Status reporting hooks
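A minimal sketch of the client-side extraction step: parse a received .eml file and write its attachments to disk using the standard library. The input file name is a placeholder, and automatic execution of the extracted code is deliberately left out.

```python
import email
from email import policy
from pathlib import Path

# Parse a received message (placeholder file name)
raw = Path("generated_app.eml").read_bytes()
msg = email.message_from_bytes(raw, policy=policy.default)

out_dir = Path("extracted")
out_dir.mkdir(exist_ok=True)

for part in msg.iter_attachments():
    filename = part.get_filename() or "unnamed"
    (out_dir / filename).write_bytes(part.get_payload(decode=True))
    print(f"extracted {filename}")

# The "execution wrapper" step would run the extracted app here; in practice
# the payload should be reviewed or sandboxed before being executed.
```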
Email-based AI software distribution is an interesting concept for specific use cases, but it will not replace traditional methods for production systems.
Recommended uses:
- Prototyping and rapid development
- Internal tool distribution in small teams
- Emergency deployment scenarios
- Educational and training environments
Not recommended for:
- Production deployment systems
- Security-critical applications
- High-frequency update cycles
- Applications requiring complex dependency management
Key success factors:
- Strong email infrastructure
- Proper security protocols
- Clear governance policies
- Comprehensive monitoring
- User education and training
The system can be a valuable addition to a developer toolkit, but it should complement, not replace, established deployment methodologies.