🚀 StatusX

AI-powered status monitoring for LLM model health and availability


Features • Installation • Usage • API Documentation • Examples

✨ Features

  • 🔄 Concurrent Health Checks - Monitor multiple models simultaneously
  • 🎯 Multi-Model Support - Chat, Embedding, Image generation, and Reranker models
  • ⚙️ Configurable Timeouts - Customize request timeouts per check
  • 📊 Detailed Metrics - Real-time latency measurements and status reporting
  • 🎨 Beautiful Dashboard - Modern, responsive web UI with real-time updates
  • 🐳 Easy Deployment - Docker Compose ready with minimal configuration
  • 📚 Auto-Generated API Docs - Interactive Swagger UI and ReDoc documentation

📦 Installation

Option 1: Local Installation

# Clone the repository
git clone https://github.com/ai-flowx/statusx.git
cd statusx

# Create a virtual environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install .

Option 2: Docker Compose (Recommended)

# Clone the repository
git clone https://github.com/ai-flowx/statusx.git
cd statusx

# Create .env file from example
cp .env.example .env

# Edit .env with your API credentials
# Start the service
docker-compose up -d

⚙️ Configuration

Create a .env file in the project root with the following variables:

DRIVEX_URL=https://api.drivex.com/v1
DRIVEX_KEY=your_drivex_api_key
DRIVEX_TIMEOUT=30
Variable         Description                  Default
DRIVEX_URL       DriveX API base URL          http://127.0.0.1
DRIVEX_KEY       Your DriveX API key          Required
DRIVEX_TIMEOUT   Request timeout in seconds   30
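
The snippet below is a minimal sketch of how these variables can be consumed from Python, assuming they are read straight from the environment with the defaults listed above; the actual settings loader inside src/statusx may differ.

# Illustrative sketch only; not the project's actual settings module.
import os

DRIVEX_URL = os.environ.get("DRIVEX_URL", "http://127.0.0.1")    # API base URL
DRIVEX_KEY = os.environ["DRIVEX_KEY"]                            # required, no default
DRIVEX_TIMEOUT = float(os.environ.get("DRIVEX_TIMEOUT", "30"))   # seconds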

🚀 Usage

Running with Docker Compose

# Start the service
docker-compose up -d

# View logs
docker-compose logs -f

# Stop the service
docker-compose down

The server will be available at http://localhost:8000

Running the server locally

# Start the development server
uvicorn src.statusx.main:app --reload

# Or use the run script
./script/run.sh

The server will be available at http://localhost:8000
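
As a quick smoke test, the sketch below polls the /api/health endpoint (documented under API Endpoints) until the server answers. It uses only the Python standard library; the response body of /api/health is not shown in this README, so the snippet simply prints whatever JSON comes back. The helper name wait_for_statusx is illustrative.

# Sketch: wait until a freshly started StatusX instance is reachable.
import json
import time
import urllib.request

def wait_for_statusx(base_url="http://localhost:8000", attempts=10):
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(f"{base_url}/api/health", timeout=5) as resp:
                return json.loads(resp.read())   # the service health payload
        except OSError:
            time.sleep(1)                        # server not up yet; retry
    raise RuntimeError("StatusX did not become reachable")

print(wait_for_statusx())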

🎨 Web Dashboard

Visit http://localhost:8000 to access the beautiful, real-time monitoring dashboard featuring:

  • ✅ System-wide health status
  • 📊 Individual model status cards
  • ⚡ Response latency metrics
  • 🔄 Auto-refresh capability
  • 🎯 Color-coded status indicators

📚 API Endpoints

Health & Status

Method   Endpoint      Description
GET      /             Root endpoint with service information
GET      /api          API endpoint listing
GET      /api/health   Service health check

Chat Models 💬

Method   Endpoint                        Description
POST     /api/models/health              Check health of multiple chat models
GET      /api/models/{model_id}/health   Check health of a specific chat model

Embedding Models 🔍

Method   Endpoint                            Description
POST     /api/embeddings/health              Check health of multiple embedding models
GET      /api/embeddings/{model_id}/health   Check health of a specific embedding model

Image Models 🎨

Method   Endpoint                        Description
POST     /api/images/health              Check health of multiple image models
GET      /api/images/{model_id}/health   Check health of a specific image model

Reranker Models ⚡

Method   Endpoint                           Description
POST     /api/rerankers/health              Check health of multiple reranker models
GET      /api/rerankers/{model_id}/health   Check health of a specific reranker model

💡 Example Requests

Check Chat Models

curl -X POST http://localhost:8000/api/models/health

Response Example:
{
  "healthy": true,
  "models": [
    {
      "model": "gpt-4o",
      "status": "healthy",
      "latency_ms": 234.56,
      "error": null
    },
    {
      "model": "gpt-3.5-turbo",
      "status": "healthy",
      "latency_ms": 156.78,
      "error": null
    }
  ],
  "timestamp": 1704326400.0
}
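
For programmatic use, the same check can be run from Python. The sketch below mirrors the curl call above using only the standard library and reads the fields shown in the response example (healthy, model, status, latency_ms, error).

# Sketch: POST /api/models/health and summarise the per-model results.
import json
import urllib.request

req = urllib.request.Request("http://localhost:8000/api/models/health", method="POST")
with urllib.request.urlopen(req, timeout=60) as resp:
    report = json.loads(resp.read())

print("overall healthy:", report["healthy"])
for item in report["models"]:
    line = f'{item["model"]}: {item["status"]} ({item["latency_ms"]} ms)'
    if item["error"]:
        line += f' error={item["error"]}'
    print(line)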

Check Embedding Models

curl -X POST http://localhost:8000/api/embeddings/health

Check Image Models

curl -X POST http://localhost:8000/api/images/health

Check Reranker Models

curl -X POST http://localhost:8000/api/rerankers/health

Check Specific Model

# Check a specific chat model
curl http://localhost:8000/api/models/gpt-4o/health

# Check a specific reranker model
curl http://localhost:8000/api/rerankers/rerank-1/health
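
The per-model checks translate directly to Python as well; the model id below is just the one from the curl example and may not exist in your deployment, and the helper name model_health is illustrative.

# Sketch: GET the health of a single model by id.
import json
import urllib.request

def model_health(model_id, base_url="http://localhost:8000"):
    url = f"{base_url}/api/models/{model_id}/health"
    with urllib.request.urlopen(url, timeout=60) as resp:
        return json.loads(resp.read())

print(model_health("gpt-4o"))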

Custom Timeout

curl -X POST http://localhost:8000/api/models/health \
  -H "Content-Type: application/json" \
  -d '{"timeout": 15}'
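
The same timeout field can be sent from Python, and because every model category exposes a POST .../health endpoint (see API Endpoints above), the checks can be fanned out concurrently. This sketch assumes the non-chat endpoints accept the same request body and return a top-level healthy flag like the chat example; verify against your deployment.

# Sketch: run all four health checks in parallel with a per-check timeout.
import json
import urllib.request
from concurrent.futures import ThreadPoolExecutor

BASE = "http://localhost:8000"
ENDPOINTS = [
    "/api/models/health",
    "/api/embeddings/health",
    "/api/images/health",
    "/api/rerankers/health",
]

def check(path, timeout_s=15):
    body = json.dumps({"timeout": timeout_s}).encode()
    req = urllib.request.Request(
        BASE + path,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # Allow some slack beyond the server-side timeout for the HTTP round trip.
    with urllib.request.urlopen(req, timeout=timeout_s + 30) as resp:
        return path, json.loads(resp.read())

with ThreadPoolExecutor(max_workers=len(ENDPOINTS)) as pool:
    for path, report in pool.map(check, ENDPOINTS):
        print(path, "healthy" if report.get("healthy") else "unhealthy")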

📖 Documentation

StatusX provides automatically generated interactive API documentation:

  • Swagger UI: http://localhost:8000/docs
  • ReDoc: http://localhost:8000/redoc

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

Built with:

  • FastAPI
  • Uvicorn
  • Docker

Made with ❤️ by the StatusX Team

Report Bug · Request Feature
