An AI-powered content generation tool focused on reliability, monitoring, and evaluation. The system demonstrates a complete workflow from content generation to quality assurance using multiple AI providers.
SPOT was created by Chris Minnick as a demo project for his book, "A Developer's Guide to Integrating Generative AI into Applications" (Wiley Publishing, 2026, ISBN: 9781394373130).
- Node.js Version (this repository) - Full-featured implementation with comprehensive evaluation framework
- Python Version - spot-python - Python implementation with the same core functionality
- Multi-Provider AI Support - OpenAI, Anthropic, Gemini with automatic failover
- Production-Ready Architecture - Error handling, circuit breakers, health monitoring
- Comprehensive Evaluation - Golden set testing with 9 test categories
- Template Management - Versioned JSON templates with A/B testing
- Style Governance - Brand voice enforcement and content validation
- Offline Style Linting - Check content compliance without API calls
- Observability - Structured logging, metrics, and monitoring
- CLI Interface - Complete command-line interface for all operations
- Web API - RESTful API server for integration with web applications
- Node.js 18+
- At least one AI provider API key (OpenAI, Anthropic, or Gemini)
# Clone and navigate to project
git clone https://github.com/chrisminnick/spot-toolkit.git
cd spot-toolkit
# Install dependencies
npm install
# Create environment configuration
npm run setup
# Edit .env with your API keys
# Required: Set at least one provider API key
PROVIDER=openai
OPENAI_API_KEY=your_api_key_here
# Check system health and validate templates
npm test
# Or run individual checks
npm run health
npm run validate
# Start interactive mode (recommended)
npm start
# Generate content using a template
npm run generate [email protected] my-content/build-ai-applications.json output.json
# Use task-specific commands
npm run scaffold -- --asset_type "blog post" --topic "AI applications" --audience "developers" --tone "technical" --word_count 800
# Run all evaluations
npm run eval:all
# Run specific operation evaluations
npm run eval:scaffold
npm run eval:expand
npm run eval:rewrite
npm run eval:summarize
npm run eval:repurpose
# Or run comprehensive evaluation
npm run eval
SPOT includes a RESTful API server for integration with web applications and services.
# Start the API server (http://localhost:8000)
npm run api
# Development mode with auto-reload
npm run api:dev
# Production mode
npm run api:prod
The API includes a beautiful web interface with:
- 📊 Real-time Health Dashboard - Monitor system health with auto-refresh
- 📚 Interactive API Documentation - Swagger UI at /docs
- 🎯 Quick Access Links - Direct access to all API endpoints
- 🎨 Modern UI Design - Clean, responsive interface
Access the web interface: http://localhost:8000
- GET /health - System health check
- GET /api/v1/info - API information and capabilities
- GET /api/v1/templates - List available templates
- POST /api/v1/scaffold - Create content scaffolds
- POST /api/v1/expand - Expand content sections
- POST /api/v1/rewrite - Rewrite content for different audiences
- POST /api/v1/summarize - Summarize content with citations
- POST /api/v1/repurpose - Repurpose content for multiple channels
- POST /api/v1/style/check - Check content style compliance
# Create a content scaffold
curl -X POST http://localhost:8000/api/v1/scaffold \
-H "Content-Type: application/json" \
-d '{
"asset_type": "blog post",
"topic": "AI applications",
"audience": "developers",
"tone": "technical",
"word_count": 600
}'
// JavaScript example
const response = await fetch('http://localhost:8000/api/v1/scaffold', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
asset_type: 'blog post',
topic: 'AI applications',
audience: 'developers',
tone: 'technical',
}),
});
const { scaffold } = await response.json();
Run the included API client examples:
# Basic API examples
node examples/api-client.js examples
# Complete content workflow
node examples/api-client.js workflow
# Health check
node examples/api-client.js health
The API server includes Swagger UI for interactive documentation:
- URL: http://localhost:8000/docs
- ✨ Features:
- Try endpoints directly in your browser
- View request/response schemas
- Generate code examples
- Real-time testing without external tools
📖 Static Documentation: docs/API.md
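The style-check endpoint listed above can be exercised the same way as the scaffold examples. The request body below is an assumption for illustration; consult docs/API.md or the Swagger UI for the authoritative schema:
// Hypothetical style-check request; the exact field names are assumptions --
// see docs/API.md or /docs for the real schema.
const res = await fetch('http://localhost:8000/api/v1/style/check', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ content: 'Draft copy to validate against the style pack...' }),
});
console.log(await res.json());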
- app.js - Main application entry point with integrated CLI
- src/SPOT.js - Core content generation orchestrator
- src/api/server.js - RESTful API server
- src/utils/ - Production utilities (error handling, monitoring, etc.)
- prompts/ - Versioned JSON prompt templates
- golden_set/ - Comprehensive test data across 9 categories
- configs/ - Provider and channel configurations
- style/ - Style pack governance rules
- Error Handling - Custom error types, retry logic, circuit breakers (sketched after this list)
- Configuration Management - Environment-aware config with validation
- Observability - Structured logging with multiple output formats
- Monitoring - Health checks, metrics collection, system monitoring
- Provider Management - Multi-provider support with intelligent failover
- Template Management - A/B testing, caching, version management
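To make the error-handling pattern concrete, here is a minimal, self-contained sketch of retries wrapped in a circuit breaker; the names (CircuitBreaker, callWithRetry) are illustrative stand-ins, not SPOT's actual utilities:
// Minimal circuit-breaker sketch (hypothetical names, not SPOT's actual API).
// After `threshold` consecutive failures the breaker opens and rejects calls
// immediately until `cooldownMs` has elapsed, then allows a trial call.
class CircuitBreaker {
  constructor({ threshold = 5, cooldownMs = 30000 } = {}) {
    this.threshold = threshold;
    this.cooldownMs = cooldownMs;
    this.failures = 0;
    this.openedAt = null;
  }

  async exec(fn) {
    if (this.openedAt && Date.now() - this.openedAt < this.cooldownMs) {
      throw new Error('Circuit open: provider temporarily disabled');
    }
    try {
      const result = await fn();
      this.failures = 0; // success closes the circuit
      this.openedAt = null;
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.failures >= this.threshold) this.openedAt = Date.now();
      throw err;
    }
  }
}

// Retry with exponential backoff, wrapped in the breaker.
async function callWithRetry(breaker, fn, retries = 3) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await breaker.exec(fn);
    } catch (err) {
      if (attempt >= retries) throw err;
      await new Promise((r) => setTimeout(r, 2 ** attempt * 250));
    }
  }
}
In a design like this, the CIRCUIT_BREAKER_THRESHOLD setting shown below would map onto the threshold option.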
Key configuration options in .env:
# Core Settings
NODE_ENV=development # development/production
LOG_LEVEL=info # debug/info/warn/error
PROVIDER=openai # openai/anthropic/gemini/mock
# AI Provider Keys (set at least one)
OPENAI_API_KEY=your_key
ANTHROPIC_API_KEY=your_key
GEMINI_API_KEY=your_key
# API Server (optional)
PORT=8000
CORS_ORIGINS=http://localhost:3000
RATE_LIMIT_MAX=100
# Performance & Reliability
CIRCUIT_BREAKER_THRESHOLD=5
HEALTH_CHECK_INTERVAL=60000
METRICS_ENABLED=true
SPOT supports multiple AI providers out of the box:
- OpenAI GPT (openai) - Set OPENAI_API_KEY
- Anthropic Claude (anthropic) - Set ANTHROPIC_API_KEY
- Google Gemini (gemini) - Set GEMINI_API_KEY
- Mock Provider (mock) - No API key needed, returns sample responses
Switch providers by setting the PROVIDER environment variable or modifying configs/providers.json.
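As an illustration of how provider switching and the mock fallback might fit together, here is a hypothetical factory sketch; createProvider is a stand-in, not SPOT's actual providerManager:
// Hypothetical provider factory: picks a provider from the PROVIDER
// environment variable and falls back to the mock provider when the
// corresponding API key is missing.
const KEY_VARS = {
  openai: 'OPENAI_API_KEY',
  anthropic: 'ANTHROPIC_API_KEY',
  gemini: 'GEMINI_API_KEY',
};

function createProvider(name = process.env.PROVIDER || 'mock') {
  const keyVar = KEY_VARS[name];
  if (keyVar && !process.env[keyVar]) {
    console.warn(`${keyVar} not set; falling back to mock provider`);
    name = 'mock';
  }
  // In a real implementation each case would return a provider instance.
  return { name, apiKey: keyVar ? process.env[keyVar] : null };
}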
Default settings are in configs/providers.json:
{
"defaultProvider": "openai",
"providers": {
"openai": { "model": "gpt-4", "maxTokens": 2000, "temperature": 0.7 },
"anthropic": {
"model": "claude-3-sonnet-20240229",
"maxTokens": 2000,
"temperature": 0.7
},
"gemini": {
"model": "gemini-1.5-pro",
"maxTokens": 2000,
"temperature": 0.7
}
}
}
npm start            # Start interactive mode (node app.js)
npm run dev # Start in development mode
npm run generate # Generate content using templates
npm run health # Check system health
npm run validate # Validate templates and configuration
npm test             # Run health check + validation
npm run api          # Start API server
npm run api:dev # Start API in development mode
npm run api:prod # Start API in production mode
npm run api:examples # Run API client examples
npm run api:workflow # Run API workflow demo
# Task-specific content generation
npm run scaffold # Brief → Scaffold (JSON structure)
npm run expand # Section → Expanded prose
npm run rewrite # Rewrite/localize content
npm run summarize # Summarize with citations
npm run repurpose    # Repurpose to multiple channels
npm run eval         # Run basic evaluation
npm run eval:scaffold # Evaluate scaffolding operation
npm run eval:expand # Evaluate expand operation
npm run eval:rewrite # Evaluate rewrite operation
npm run eval:summarize # Evaluate summarize operation
npm run eval:repurpose # Evaluate repurpose operation
npm run eval:all     # Run all evaluation operations
npm run setup        # Copy .env template and prompt for configuration
npm run clean # Remove temporary files and logs
npm run lint # Check content style compliance (offline)
npm run lint:content # Explicit content style linting
SPOT includes an offline style linter that validates content against your style pack rules:
# Lint a specific content file
npm run lint my_content/article.txt
# The linter checks:
# ✅ Reading level (Flesch-Kincaid grade)
# ✅ Banned terms (must_avoid list)
# ✅ Required terms (must_use list)
# Example output:
# Style Lint Report for: article.txt
# Reading Level: 8.2 (Target: Grade 8-10)
# Reading Level OK: ✅
# ✅ No banned terms found
# ✅ All required terms present
# Setup and validate
npm run setup
npm test
# Generate content interactively
npm start
# Task-specific generation with parameters
npm run scaffold -- --asset_type "landing page" --topic "Privacy-first analytics" --audience "startup founders" --tone "confident" --word_count 600
npm run expand -- --section_json '{"heading":"Why Privacy Matters","bullets":["Build trust","Comply with regulations"]}'
npm run rewrite -- --text "Original content..." --audience "CFOs" --tone "formal" --grade_level 9 --words 140 --locale "en-GB"
npm run summarize -- --file golden_set/transcripts/build-ai-applications-1.txt --mode executive
npm run repurpose -- --file golden_set/repurposing/example_article.md
# Interactive mode
node app.js
# Direct commands
node app.js health
node app.js generate [email protected] input.json output.json
node app.js evaluate
# Task-specific CLI
node src/cli.js scaffold --asset_type "blog post" --topic "AI applications"
node src/cli.js expand --section_json '{"heading":"Title","bullets":["Point 1"]}'
The evaluation system allows you to test and benchmark your prompts and AI provider performance across different scenarios.
# Run all evaluation operations
npm run eval:all
# Run specific operation evaluations
npm run eval:scaffold # Test brief → scaffold generation
npm run eval:expand # Test section expansion
npm run eval:rewrite # Test content rewriting
npm run eval:summarize # Test transcript summarization
npm run eval:repurpose # Test content repurposing
# Basic evaluation
npm run eval
# Evaluate all brief files (default)
node src/eval/runEvaluations.js
# Evaluate specific files
node src/eval/runEvaluations.js brief1.json brief2.json
# Evaluate specific directory and operation
node src/eval/runEvaluations.js --directory golden_set/briefs --operation scaffold
# Get help
node src/eval/runEvaluations.js --help
The evaluation harness computes several key metrics:
- Style violations per 1,000 words - Checks adherence to your style pack rules (sketched after this list)
- Reading level band compliance - Ensures content matches target audience
- API latency - Response times for performance benchmarking
- Quality metrics - Comparable quality scores across different prompt templates and providers
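To make the first two metrics concrete, here is an illustrative sketch of a violations-per-1,000-words rate and a Flesch-Kincaid grade; the helper names and the crude syllable counter are assumptions, not the harness's actual code:
// Illustrative metric helpers (not the actual evaluation harness code).
// Violations per 1,000 words: normalize rule hits by document length.
function violationsPerThousandWords(violationCount, text) {
  const words = text.trim().split(/\s+/).length;
  return (violationCount / words) * 1000;
}

// Flesch-Kincaid grade level, using a rough vowel-group syllable count.
function fleschKincaidGrade(text) {
  const words = text.trim().split(/\s+/);
  const sentences = Math.max(1, (text.match(/[.!?]+/g) || []).length);
  const syllables = words.reduce(
    (sum, w) => sum + Math.max(1, (w.toLowerCase().match(/[aeiouy]+/g) || []).length),
    0
  );
  return 0.39 * (words.length / sentences) + 11.8 * (syllables / words.length) - 15.59;
}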
A sample evaluation report looks like this:
{
"count": 2,
"latency": {
"p50": 125,
"p95": 180
},
"samples": [
{
"brief": "brief1.json",
"latencyMs": 125,
"style": {
"violations": 0,
"readingLevel": "appropriate",
"mustUse": ["privacy", "startup"],
"mustAvoid": []
}
}
]
}
The evaluation system uses a comprehensive test suite in the golden_set/ directory, organized by test purpose:
Core Test Categories:
- golden_set/briefs/ - Sample input briefs for testing scaffold generation (includes easy, medium, hard, and extreme complexity levels)
- golden_set/transcripts/ - Sample transcripts for summarization testing
- golden_set/repurposing/ - Sample content for repurposing evaluation
Quality Assurance Categories:
- golden_set/edge_cases/ - Boundary conditions and unusual inputs (empty fields, special characters, extreme complexity)
- golden_set/style_compliance/ - Content designed to test style pack rule adherence (must_use/must_avoid terms, terminology)
- golden_set/performance/ - Large files and stress test scenarios
- golden_set/provider_comparison/ - Standardized tests for comparing AI provider outputs
- golden_set/domain_specific/ - Specialized content requiring domain expertise (technical, legal, medical)
- golden_set/expected_outputs/ - Reference outputs for validation and regression testing
File Naming Convention:
Files follow the pattern: {type}_{difficulty}_{description}.{ext}
- brief_easy_react_tutorial.json - Simple tutorial brief
- transcript_hard_technical_meeting.txt - Complex technical meeting transcript
- article_medium_remote_teams.md - Medium complexity repurposing content
You can compare different AI providers by switching the PROVIDER environment variable and running the same evaluation:
# Test with OpenAI
PROVIDER=openai npm run eval:all
# Test with Claude
PROVIDER=anthropic npm run eval:all
# Test with mock provider (no API costs)
PROVIDER=mock npm run eval:all
# Provider selection
PROVIDER=openai # openai, anthropic, gemini, or mock
# API Keys (only needed for respective providers)
OPENAI_API_KEY=your_openai_key
ANTHROPIC_API_KEY=your_anthropic_key
GEMINI_API_KEY=your_gemini_key
# Optional: Override default models
OPENAI_MODEL=gpt-4-turbo
ANTHROPIC_MODEL=claude-3-opus-20240229
GEMINI_MODEL=gemini-1.5-pro-latest
# System settings
NODE_ENV=development # development/production
LOG_LEVEL=info # debug/info/warn/error
LOG_FORMAT=json # json/text
# Performance tuning
CIRCUIT_BREAKER_THRESHOLD=5
HEALTH_CHECK_INTERVAL=60000
METRICS_ENABLED=true
spot-toolkit/
├── app.js # Main CLI application
├── src/
│ ├── SPOT.js # Core orchestrator
│ ├── cli.js # Task-specific CLI
│ ├── api/ # Web API server
│ │ └── server.js # Express.js API server
│ ├── providers/ # AI provider implementations
│ ├── utils/ # Production utilities
│ └── eval/ # Evaluation system
├── prompts/ # Versioned JSON templates
├── golden_set/ # Test data and evaluation
├── configs/ # Configuration files
├── style/ # Style governance
├── docs/ # Additional documentation
├── examples/ # API client examples
└── scripts/ # Automation scripts
1. "Provider not found" error:
# Check your .env file has the correct PROVIDER value
echo $PROVIDER
# Should be one of: openai, anthropic, gemini, mock
2. "API key not found" error:
# Ensure you've set the appropriate API key
echo $OPENAI_API_KEY # or $ANTHROPIC_API_KEY, $GEMINI_API_KEY
3. "Template not found" error:
# Validate all templates
npm run validate
4. npm script parameters:
# Use -- to pass parameters to npm run scripts
npm run scaffold -- --asset_type "blog post" --topic "AI"
# Not: npm run scaffold --asset_type "blog post" --topic "AI"
5. API server not starting:
# Check if port is available
npm run api:dev
# Or set a different port: PORT=3001 npm run api
- Run npm start for interactive mode (recommended for beginners)
- Run npm run validate to check templates and configuration
- Run npm run health to verify system status
- Check the docs/ directory for detailed guides
- Review the golden_set/README.md for evaluation help
- Minimal dependencies - Only uses dotenv for environment variable loading, plus the Express.js ecosystem for the API
- Provider abstraction - Easy to add new AI providers or swap between existing ones
- Graceful fallback - Automatically falls back to the mock provider if API keys are missing
- Versioned prompts - Prompts are plain JSON with {placeholders}, compatible with most prompt-management tools (see the sketch after this list)
- Production-ready patterns - Includes error handling, configuration management, and an evaluation framework
- Full API support - Complete REST API for integration with web applications
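To illustrate the {placeholders} convention, here is a minimal substitution sketch; fillTemplate and the sample template are illustrative, not SPOT's actual template engine:
// Illustrative placeholder substitution for a plain-JSON prompt template
// (hypothetical helper, not SPOT's actual template engine).
const template = {
  version: '1.0.0',
  prompt: 'Write a {asset_type} about {topic} for {audience} in a {tone} tone.',
};

function fillTemplate(text, values) {
  return text.replace(/\{(\w+)\}/g, (match, key) =>
    key in values ? values[key] : match // leave unknown placeholders intact
  );
}

console.log(
  fillTemplate(template.prompt, {
    asset_type: 'blog post',
    topic: 'AI applications',
    audience: 'developers',
    tone: 'technical',
  })
);
// -> Write a blog post about AI applications for developers in a technical tone.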
- Create src/providers/newProvider.js extending the base Provider class (a skeleton is sketched after these steps)
- Add provider configuration to configs/providers.json
- Update src/utils/providerManager.js with the new provider case
- Add API key handling in the provider factory's getApiKey() method
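As a rough starting point, a new provider might look like the skeleton below; the base-class import path and the generate() signature are assumptions, so check src/providers/ for the actual interface:
// Hypothetical skeleton for a new provider; the base class path and the
// generate() signature are assumptions -- consult src/providers/ for the
// actual interface before implementing.
import { Provider } from './provider.js';

export class NewProvider extends Provider {
  constructor(config) {
    super(config);
    this.apiKey = config.apiKey; // supplied by the factory's getApiKey()
  }

  async generate(prompt, options = {}) {
    // Call the vendor's API here and normalize the response into the
    // shape the rest of SPOT expects.
    throw new Error('NewProvider.generate() not implemented yet');
  }
}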
- Follow the existing code structure and patterns
- Add appropriate tests in the golden_set/ directory
- Validate changes with npm run validate
- Run comprehensive evaluation: npm run eval:all
- Test with npm test before submitting
© 2025 Chris Minnick. All rights reserved.
This software and associated documentation files (the "Software") are protected by copyright and other intellectual property laws. The Software is licensed, not sold.
This project is licensed under the MIT License - see the LICENSE file for details.
