A professional FastAPI application for digit recognition built with a PyTorch CNN, HTMX, and modern web technologies.
```bash
# Complete installation and setup
./install.sh
# or
make install-app
```

## Table of Contents

- Features
- Architecture
- Installation
- Usage
- Development
- Production
- Docker
- Testing
- API Documentation
- Performance
- Security
- Monitoring
- Contributing
- License
## Features

- ✅ PyTorch CNN Model - High-accuracy digit recognition
- ✅ FastAPI Backend - Modern, fast REST API
- ✅ HTMX Frontend - Dynamic web interface
- ✅ Real-time Prediction - Instant digit recognition
- ✅ Batch Processing - Multiple image support
- ✅ File Upload - Direct image upload support
- ✅ Professional Architecture - Clean, modular code structure
- ✅ Type Safety - Full type hints and validation
- ✅ Error Handling - Comprehensive error management
- ✅ Logging - Structured logging with different levels
- ✅ Caching - Static file caching and optimization
- ✅ Security Headers - XSS, CSRF protection
- ✅ Compression - GZip middleware for performance
- ✅ uvloop - High-performance event loop
- ✅ Gunicorn + Uvicorn - Production-ready server
- ✅ Worker Management - Dynamic worker scaling
- ✅ Connection Pooling - Optimized connection handling
- ✅ Memory Optimization - Low memory footprint
## Architecture

```
fastapi_htmx_paint/
├── app/
│   ├── models/              # ML models and predictors
│   ├── routes/              # API endpoints
│   ├── static/              # Static files (CSS, JS, images)
│   ├── templates/           # HTML templates
│   └── asgi.py              # FastAPI application
├── scripts/                 # Utility scripts
├── tests/                   # Test suite
├── model/                   # Trained model files
├── data/                    # Dataset files
├── main.py                  # Application entry point
├── install.sh               # Installation script
├── run.sh                   # Smart runner script
├── docker-entrypoint.sh     # Docker entrypoint
└── requirements.txt         # Dependencies
```
## Installation

### Prerequisites

- Python 3.8+
- pip or uv
- Docker (optional)
### Quick Install

```bash
# Clone repository
git clone <repository-url>
cd fastapi_htmx_paint

# Run installation script
./install.sh
```

### Manual Install

```bash
# Create virtual environment
python3 -m venv .venv
source .venv/bin/activate

# Install dependencies
pip install -r requirements.txt
pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu
pip install gunicorn

# Train model
python scripts/train_cnn.py

# Run tests
python -m pytest tests/ -v
```

## Development

```bash
# Start with auto-reload
make run-dev
# or
ENVIRONMENT=development python main.py
```

Features:
- ✅ Uvicorn + Auto-reload - Fast development cycle
- ✅ Hot Reload - Code changes reflect immediately
- ✅ Debug Mode - Detailed error messages
- ✅ Access Logs - Request/response logging
## Production

```bash
# Start with Gunicorn + Uvicorn worker
make run-prod
# or
make start-prod
# or
./scripts/start_production.sh
```

Features:
- ✅ Gunicorn + Uvicorn Worker - Optimal performance
- ✅ Automatic Workers - Based on CPU cores (CPU * 2 + 1)
- ✅ Worker Management - Crash recovery and load balancing
- ✅ Connection Pooling - 1000 connections
- ✅ Request Limiting - Prevents memory leaks
- ✅ Preload - Fast startup
- ✅ Production Logging - Access and error logs
- ✅ Security Limits - Request size and field limits
- ✅ Graceful Shutdown - Safe stopping
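The automatic worker count above follows the common `CPU * 2 + 1` heuristic. A minimal sketch of how a Gunicorn config could derive it (the `WEB_CONCURRENCY` override is illustrative, not necessarily what this repo uses):

```python
import multiprocessing
import os

def default_workers() -> int:
    """Gunicorn worker heuristic: two workers per CPU core, plus one."""
    return multiprocessing.cpu_count() * 2 + 1

# Allow an override via an (illustrative) environment variable.
workers = int(os.environ.get("WEB_CONCURRENCY", default_workers()))
worker_class = "uvicorn.workers.UvicornWorker"  # Gunicorn managing Uvicorn workers
```

With this shape, Gunicorn supervises the processes (crash recovery, graceful shutdown) while each Uvicorn worker serves the ASGI app.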
### Docker

```bash
# With Docker (interactively choose development or production)
docker run -p 8000:8000 digit-recognizer
```

Features:
- ✅ Interactive Mode Selection - Choose Development or Production
- ✅ Development Mode - Uvicorn + auto-reload
- ✅ Production Mode - Gunicorn + Uvicorn worker
- ✅ Environment Isolation - Complete isolation
- ✅ Easy Deployment - Start with one command
## Testing

```bash
# Run with pytest
make test
# or
python -m pytest tests/ -v

# Run with coverage
make test-cov
```

- ✅ API Tests - Endpoint functionality
- ✅ Model Tests - ML model predictions
- ✅ Integration Tests - Full workflow testing
- ✅ Performance Tests - Load testing
### Code Quality

```bash
# Lint and format
make lint
make format

# Type checking
make type-check

# All quality checks
make quality
```

## API Documentation

Base URL: http://localhost:8000
### Health Check

```
GET /api/v1/health
```

### Model Info

```
GET /api/v1/model/info
```

### Predict

```
POST /api/v1/predict
Content-Type: application/json

{
  "image": "base64_encoded_image"
}
```

### Batch Predict

```
POST /api/v1/predict/batch
Content-Type: application/json

{
  "images": ["base64_encoded_image1", "base64_encoded_image2"]
}
```

### File Upload Predict

```
POST /api/v1/predict/file
Content-Type: multipart/form-data

file: image_file
```

### Interactive Docs

- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
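As a sketch, a client can build the JSON body for `POST /api/v1/predict` like this (the payload shape matches the schema above; the response fields are not documented here, so the example only constructs the request):

```python
import base64
import json

def build_predict_payload(image_bytes: bytes) -> str:
    """Encode raw image bytes as the JSON body expected by POST /api/v1/predict."""
    return json.dumps({"image": base64.b64encode(image_bytes).decode("ascii")})

# Example: encode (fake) image bytes into the request body.
payload = build_predict_payload(b"\x89PNG...fake image bytes")
# urllib.request (or any HTTP client) could then POST `payload` to
# http://localhost:8000/api/v1/predict with Content-Type: application/json.
```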
## Performance

- Model Accuracy: ~99% on MNIST test set
- Inference Time: <50ms per prediction
- Memory Usage: <500MB total
- Concurrent Requests: 1000+ with Gunicorn
- ✅ Model Optimization - CPU-optimized PyTorch
- ✅ Image Preprocessing - Efficient resizing and normalization
- ✅ Caching - Static file and response caching
- ✅ Compression - GZip compression for responses
- ✅ Connection Reuse - Persistent connections
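The preprocessing step typically scales pixels to [0, 1] and then standardizes with the MNIST mean/std. A NumPy sketch, assuming the standard MNIST statistics (the repo's actual pipeline may differ):

```python
import numpy as np

MNIST_MEAN, MNIST_STD = 0.1307, 0.3081  # standard MNIST statistics

def normalize(pixels: np.ndarray) -> np.ndarray:
    """Scale uint8 pixels (0-255) to [0, 1], then standardize the way
    torchvision's Normalize((0.1307,), (0.3081,)) would."""
    scaled = pixels.astype(np.float32) / 255.0
    return (scaled - MNIST_MEAN) / MNIST_STD

# A blank 28x28 grayscale "digit", ready to be batched for the CNN.
image = np.zeros((28, 28), dtype=np.uint8)
normalized = normalize(image)
```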
## Security

- ✅ Input Validation - Comprehensive request validation
- ✅ File Upload Security - Safe file handling
- ✅ CORS Protection - Cross-origin request control
- ✅ Security Headers - XSS, CSRF protection
- ✅ Rate Limiting - Request rate control
- ✅ Error Sanitization - Safe error messages
- ✅ Environment Variables - Secure configuration
- ✅ Dependency Scanning - Regular security updates
- ✅ Code Review - Security-focused development
- ✅ Testing - Security test coverage
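The input-validation idea can be sketched with stdlib-only code (the size limit and error messages are illustrative; the repo's actual validators may rely on Pydantic instead):

```python
import base64
import binascii

MAX_IMAGE_BYTES = 1 * 1024 * 1024  # illustrative 1 MiB payload limit

def validate_image_field(b64_image: str) -> bytes:
    """Reject oversized or malformed base64 image payloads before further processing."""
    if len(b64_image) > MAX_IMAGE_BYTES * 4 // 3 + 4:
        raise ValueError("image payload too large")
    try:
        raw = base64.b64decode(b64_image, validate=True)
    except binascii.Error as exc:
        raise ValueError("image is not valid base64") from exc
    if not raw:
        raise ValueError("image payload is empty")
    return raw
```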
## Monitoring

- ✅ Structured Logging - JSON format logs
- ✅ Log Levels - DEBUG, INFO, WARNING, ERROR
- ✅ Request Logging - Access and error logs
- ✅ Performance Logging - Response time tracking
- ✅ Health Checks - Application health monitoring
- ✅ Performance Metrics - Response time, throughput
- ✅ Error Tracking - Error rate and types
- ✅ Resource Usage - CPU, memory monitoring
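A minimal sketch of JSON-formatted structured logging using only the standard library (the field names are illustrative; the app may use a dedicated logging library instead):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object (level, logger, message)."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("digit_recognizer")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("model loaded")  # emitted as one JSON line
```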
## Makefile Commands

```bash
# Installation
make install-app     # Run installation script

# Development
make run             # Start application
make run-dev         # Development mode
make run-prod        # Production mode
make start-prod      # Production startup script

# Testing
make test            # Run tests
make test-cov        # Run tests with coverage
make lint            # Run linter
make format          # Format code
make type-check      # Type checking
make quality         # All quality checks

# Docker
make docker-build    # Build Docker image
make docker-run      # Run Docker container

# Utilities
make clean           # Clean cache files
make help            # Show help
```

Workflow:

1. Setup: Run `./install.sh`
2. Development: Use `make run-dev`
3. Testing: Run `make test`
4. Quality: Check with `make quality`
5. Production: Deploy with `make run-prod`
## Docker

```bash
# Build Docker image
make docker-build
# or
docker build -t digit-recognizer .

# Run with interactive mode selection
docker run -p 8000:8000 digit-recognizer

# Run in specific mode
docker run -p 8000:8000 -e ENVIRONMENT=production digit-recognizer
```

- ✅ Multi-stage Build - Optimized image size
- ✅ Health Checks - Container health monitoring
- ✅ Environment Variables - Configurable settings
- ✅ Volume Mounting - Persistent data storage
- ✅ Network Configuration - Port mapping
## Contributing

1. Fork the repository
2. Create a feature branch: `git checkout -b feature-name`
3. Make changes and test: `make test`
4. Check quality: `make quality`
5. Commit changes: `git commit -m 'Add feature'`
6. Push the branch: `git push origin feature-name`
7. Create a Pull Request
### Code Standards

- ✅ Type Hints - Full type annotation
- ✅ Docstrings - Comprehensive documentation
- ✅ Error Handling - Proper exception handling
- ✅ Testing - High test coverage
- ✅ Formatting - Consistent code style
## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments

- PyTorch - Deep learning framework
- FastAPI - Modern web framework
- HTMX - Dynamic web interface
- MNIST - Digit recognition dataset
- Gunicorn - Production WSGI server
- Uvicorn - Lightning-fast ASGI server
Made with ❤️ for digit recognition