# StreamServer API Reference
This document provides the complete API specification for PyTrickle's `StreamServer`, including all endpoints, request/response models, and configuration options.

The `StreamServer` exposes a REST API for managing streaming operations. All endpoints use JSON request and response bodies unless otherwise specified.
## Start Stream (`POST /api/stream/start`)

Start a new streaming session with processing.
**Request Body:**

```json
{
  "subscribe_url": "http://localhost:3389/input",
  "publish_url": "http://localhost:3389/output",
  "control_url": "http://localhost:3389/control",  // Optional
  "events_url": "http://localhost:3389/events",    // Optional
  "data_url": "http://localhost:3389/data",        // Optional, for text/data publishing
  "gateway_request_id": "unique-stream-id",
  "params": {                                      // Optional processing parameters
    "width": 704,
    "height": 384,
    "max_framerate": 30,                           // Cannot be changed after start
    "intensity": 0.7,                              // Custom parameters for your processor
    "custom_param": "any_value"
  }
}
```

**Response:**
```json
{
  "status": "success",
  "message": "Stream started successfully",
  "request_id": "unique-stream-id"
}
```

**Error Response:**
```json
{
  "status": "error",
  "message": "Error starting stream: [detailed error message]"
}
```

## Stop Stream (`POST /api/stream/stop`)

Stop the current streaming session.
**Request Body:** Empty

**Response:**
```json
{
  "status": "success",
  "message": "Stream stopped successfully"
}
```

## Update Parameters (`POST /api/stream/params`)

Update processing parameters in real time during streaming.
**Request Body:**
```json
{
  "intensity": 0.9,
  "threshold": 0.5,
  "custom_param": "new_value"
}
```

**Response:**
```json
{
  "status": "success",
  "message": "Parameters updated successfully"
}
```

**Note:** `max_framerate`, `width`, and `height` cannot be updated at runtime and must be set when starting the stream.
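As a minimal client-side sketch of this endpoint, the runtime restrictions above can be enforced before a request ever reaches the server. Only the endpoint path comes from this document; `IMMUTABLE_PARAMS`, `build_param_update`, and `update_params` are illustrative helpers, not part of PyTrickle:

```python
import json
import urllib.request

# Parameters the server rejects at runtime; they must be set at stream start.
IMMUTABLE_PARAMS = {"max_framerate", "width", "height"}

def build_param_update(params: dict) -> dict:
    """Reject immutable parameters before posting to /api/stream/params."""
    rejected = IMMUTABLE_PARAMS & params.keys()
    if rejected:
        raise ValueError(f"Cannot update at runtime: {sorted(rejected)}")
    return params

def update_params(base_url: str, params: dict) -> dict:
    """POST a runtime parameter update and return the decoded JSON response."""
    body = json.dumps(build_param_update(params)).encode()
    req = urllib.request.Request(
        f"{base_url}/api/stream/params",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Failing fast on the client avoids a round trip that would only produce a `400` response.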
## Get Status (`GET /api/stream/status`)

Get detailed status information about the current streaming session.

**Response:**
```json
{
  "state": "PROCESSING",          // LOADING, IDLE, PROCESSING, ERROR
  "pipeline_ready": true,
  "startup_complete": true,
  "active_streams": 1,
  "active_client": true,
  "client_active": true,
  "client_running": true,
  "fps": {                        // Frame rate statistics
    "input_fps": 29.8,
    "output_fps": 29.7,
    "processing_fps": 29.9
  },
  "current_params": {             // Current stream configuration
    "subscribe_url": "http://localhost:3389/input",
    "publish_url": "http://localhost:3389/output",
    "gateway_request_id": "unique-stream-id"
  }
}
```

## Health Check (`GET /health`)

Health check endpoint for container orchestration (Kubernetes/Docker).
**Response:**

```json
{
  "status": "OK"  // LOADING, IDLE, OK, ERROR
}
```

**HTTP Status Codes:**

- `200` - Service is healthy
- `500` - Service has errors
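Deployment scripts often need to block until the service is ready. A small polling sketch against the `/health` endpoint above (the `is_ready`/`wait_until_ready` helpers are illustrative, not part of PyTrickle):

```python
import json
import time
import urllib.request

def is_ready(health_payload: dict) -> bool:
    """Interpret a /health response: only "OK" means ready for traffic."""
    return health_payload.get("status") == "OK"

def wait_until_ready(base_url: str, timeout: float = 60.0, interval: float = 2.0) -> bool:
    """Poll /health until the service reports OK or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(f"{base_url}/health") as resp:
                if is_ready(json.load(resp)):
                    return True
        except OSError:
            pass  # Server not up yet; keep polling.
        time.sleep(interval)
    return False
```

Note that `LOADING` is deliberately treated as not-ready, matching the readiness-probe semantics used in the Kubernetes example later in this document.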
## Version Information

Get service version information.
**Response:**

```json
{
  "pipeline": "byoc",
  "model_id": "my-processor",
  "version": "1.0.0"
}
```

## GPU Hardware Information

Get GPU compute capability information.
**Response:**

```json
{
  "pipeline": "byoc",
  "model_id": "my-processor",
  "gpu_info": {
    "0": {                        // GPU index
      "name": "NVIDIA GeForce RTX 4090",
      "compute_capability": "8.9",
      "memory_total": 24564,      // MB
      "memory_available": 22000   // MB
    }
  }
}
```

## GPU Utilization Statistics

Get real-time GPU utilization statistics.
**Response:**

```json
{
  "pipeline": "byoc",
  "model_id": "my-processor",
  "gpu_stats": {
    "0": {                    // GPU index
      "utilization": 85,      // Percentage
      "memory_used": 2564,    // MB
      "memory_total": 24564,  // MB
      "temperature": 72       // Celsius
    }
  }
}
```

## Legacy Alias

Alias for `/api/stream/start`, kept for backward compatibility.

**Request/Response:** Same as `/api/stream/start`.
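One practical use of the GPU statistics payload is scheduling work onto the least busy device. A sketch that consumes the `gpu_stats` shape documented above (`least_loaded_gpu` is an illustrative helper, not part of PyTrickle):

```python
from typing import Optional

def least_loaded_gpu(gpu_stats: dict) -> Optional[str]:
    """Return the index of the GPU with the lowest utilization,
    breaking ties by the amount of free memory (memory_total - memory_used)."""
    if not gpu_stats:
        return None

    def load_key(item):
        _idx, stats = item
        free = stats["memory_total"] - stats["memory_used"]
        return (stats["utilization"], -free)  # lower utilization, then more free memory

    return min(gpu_stats.items(), key=load_key)[0]
```

A gateway could poll the statistics endpoint across several workers and route new streams with this kind of comparison.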
## Server Configuration

The `StreamServer` constructor supports extensive configuration:
```python
server = StreamServer(
    frame_processor=my_processor,
    port=8000,                        # Server port
    pipeline="my-pipeline",           # Pipeline identifier
    capability_name="my-processor",   # Capability name
    version="1.0.0",                  # Version string

    # Route configuration
    route_prefix="/api",              # API route prefix
    enable_default_routes=True,       # Enable built-in routes
    custom_routes=[                   # Add custom routes
        {"method": "GET", "path": "/custom", "handler": my_handler}
    ],

    # CORS and middleware
    cors_config={"origins": "*"},     # CORS configuration
    middleware=[my_middleware],       # Custom middleware

    # Static file serving
    static_routes=[
        {"prefix": "/static", "path": "./static"}
    ],

    # Health monitoring
    health_check_interval=5.0,        # Health check interval (seconds)

    # Timeouts
    publisher_timeout=30.0,           # Publisher timeout (seconds)
    subscriber_timeout=30.0,          # Subscriber timeout (seconds)

    # Lifecycle hooks
    on_startup=[startup_handler],     # Startup callbacks
    on_shutdown=[shutdown_handler]    # Shutdown callbacks
)
```

## Error Handling

All endpoints return consistent error responses:
```json
{
  "status": "error",
  "message": "Detailed error description"
}
```

Common HTTP status codes:
- `200` - Success
- `400` - Bad request (validation errors, missing parameters)
- `500` - Internal server error
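Because every endpoint shares this error shape, a client can centralize its error handling. A minimal sketch based only on the response format above (`StreamAPIError` and `check_response` are illustrative names, not part of PyTrickle):

```python
class StreamAPIError(RuntimeError):
    """Raised when the server returns status "error" or a non-200 code."""

def check_response(http_status: int, payload: dict) -> dict:
    """Validate a StreamServer response against the documented error shape."""
    if http_status != 200 or payload.get("status") == "error":
        raise StreamAPIError(payload.get("message", f"HTTP {http_status}"))
    return payload
```

Routing every response through one checker means the `message` field from the server surfaces directly in client-side exceptions.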
## Parameter Validation

The API automatically validates parameters using Pydantic models:

- **Type conversion:** `width`/`height` are automatically converted to integers
- **Range validation:** `max_framerate` is limited to 1-60 FPS
- **Required fields:** `subscribe_url`, `publish_url`, and `gateway_request_id` are required
- **Runtime restrictions:** Some parameters cannot be changed during streaming

## Features

- **Parameter updates:** Change processing parameters without restarting streams
- **Status monitoring:** Get real-time FPS and performance metrics
- **Health checks:** Container orchestration support with proper status codes
- **Event publishing:** Automatic monitoring events sent to `events_url`
- **Data publishing:** Structured data output via the `data_url` channel
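PyTrickle's actual Pydantic models are internal, but the documented validation rules can be illustrated with a plain-Python sketch (`validate_start_request` and `REQUIRED_FIELDS` are hypothetical names used only for this example):

```python
REQUIRED_FIELDS = ("subscribe_url", "publish_url", "gateway_request_id")

def validate_start_request(body: dict) -> dict:
    """Apply the documented validation rules to a /api/stream/start body."""
    missing = [f for f in REQUIRED_FIELDS if f not in body]
    if missing:
        raise ValueError(f"Missing required fields: {missing}")

    params = dict(body.get("params", {}))
    for key in ("width", "height"):
        if key in params:
            params[key] = int(params[key])  # Type conversion to integers

    if "max_framerate" in params:
        fps = int(params["max_framerate"])
        if not 1 <= fps <= 60:              # Range validation: 1-60 FPS
            raise ValueError("max_framerate must be between 1 and 60")
        params["max_framerate"] = fps

    return {**body, "params": params}
```

A body with `"width": "704"` passes and comes back with the integer `704`, mirroring the automatic type conversion described above.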
## Usage Examples

Start a stream:

```bash
curl -X POST http://localhost:8000/api/stream/start \
  -H "Content-Type: application/json" \
  -d '{
    "subscribe_url": "http://localhost:3389/input",
    "publish_url": "http://localhost:3389/output",
    "gateway_request_id": "demo_stream",
    "params": {
      "width": 704,
      "height": 384,
      "intensity": 0.7
    }
  }'
```

Update parameters mid-stream:

```bash
curl -X POST http://localhost:8000/api/stream/params \
  -H "Content-Type: application/json" \
  -d '{
    "intensity": 0.9,
    "threshold": 0.5
  }'
```

Check status, stop the stream, and run a health check:

```bash
curl http://localhost:8000/api/stream/status
curl -X POST http://localhost:8000/api/stream/stop
curl http://localhost:8000/health
```

## Deployment

### Kubernetes

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pytrickle-processor
spec:
  containers:
    - name: processor
      image: pytrickle:latest
      ports:
        - containerPort: 8000
      livenessProbe:
        httpGet:
          path: /health
          port: 8000
        initialDelaySeconds: 30
        periodSeconds: 10
      readinessProbe:
        httpGet:
          path: /health
          port: 8000
        initialDelaySeconds: 5
        periodSeconds: 5
```

### Docker Compose

```yaml
version: '3.8'
services:
  pytrickle:
    build: .
    ports:
      - "8000:8000"
    environment:
      - PYTHONPATH=/app
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
```

### Nginx Load Balancing

```nginx
upstream pytrickle_backend {
    server pytrickle1:8000;
    server pytrickle2:8000;
    server pytrickle3:8000;
}

server {
    listen 80;

    location / {
        proxy_pass http://pytrickle_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location /health {
        proxy_pass http://pytrickle_backend/health;
    }
}
```

## Troubleshooting

**400 Bad Request**
- Check that all required fields are provided
- Verify parameter types (`width`/`height` must be integers)
- Ensure `max_framerate` is between 1 and 60
**500 Internal Server Error**
- Check server logs for detailed error messages
- Verify that the frame processor is properly initialized
- Ensure all URLs are accessible
**Health Check Failures**
- Verify the service is running and accessible
- Check that the frame processor has completed initialization
- Review startup logs for any initialization errors
## Debugging

For debugging purposes, you can add custom routes:

```python
import time
from aiohttp import web

async def debug_handler(request):
    return web.json_response({
        "debug_info": "Custom debug information",
        "timestamp": time.time()
    })

server.add_route("GET", "/debug", debug_handler)
```

## Performance Considerations

- **Health check interval:** Balance between responsiveness and overhead
- **Parameter validation:** Automatic validation adds minimal overhead
- **Status polling:** Use appropriate intervals for status monitoring
- **Error handling:** Consistent error responses help with debugging
- **CORS:** Configure appropriately for your deployment environment