A simple HTTP server that processes text through large language models (LLMs) to detect emergency situations. It currently supports LM Studio and is prepared for Ollama integration.
- HTTP server implementation for emergency text analysis
- Support for LM Studio integration
- JSON-based API interface
- Configurable port and model settings
- Platform override capabilities
- Python 3.x
- LM Studio installed and running locally
- Required Python packages:
  - `http.server`, `json` (Python standard library)
  - Custom `TBED_LMstudio` module
- Clone the repository:

```
git clone [repository-url]
```

- Install required dependencies:

```
pip install -r requirements.txt
```

Default settings in the code:

```python
server_port = 9111
platform = "LMstudio"
port = 1234
url = 'http://localhost:1234/v1/chat/completions'
model = "llama3"
```

- Start the server:

```
python server.py
```

- Send POST requests to `http://localhost:9111` with a JSON payload.
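The POST handler could be sketched with the standard library alone. This is an illustrative outline, not the actual `server.py`: the handler class, `analyze_text`, and `_reply` are hypothetical names standing in for the real implementation and the `TBED_LMstudio` call.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

SERVER_PORT = 9111  # default port from the settings above

def analyze_text(text):
    # Placeholder for the real LLM call (e.g. via the TBED_LMstudio module)
    return "response data"

class EmergencyHandler(BaseHTTPRequestHandler):
    """Hypothetical sketch of the POST endpoint described in this README."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        try:
            payload = json.loads(self.rfile.read(length))
        except json.JSONDecodeError:
            # Invalid JSON format -> 400, matching the error format below
            self._reply(400, {"status": "error",
                              "message": "Invalid JSON format"})
            return
        self._reply(200, {"status": "success",
                          "message": "Data received successfully",
                          "emergency": analyze_text(payload.get("message", ""))})

    def _reply(self, code, body):
        data = json.dumps(body).encode("utf-8")
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

# To run: HTTPServer(("", SERVER_PORT), EmergencyHandler).serve_forever()
```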
Basic request format:

```json
{
  "message": "your text here"
}
```

Advanced request with overrides:

```json
{
  "message": "your text here",
  "platform_override": "LMstudio",
  "port_override": 1234
}
```

POST /
| Parameter | Type | Description |
|---|---|---|
| message | string | Text to analyze for emergency situations |
| platform_override | string | (Optional) Override default platform |
| port_override | integer | (Optional) Override default port |
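A minimal client built on the standard library might assemble and send these parameters as follows. The helper names (`build_payload`, `send_message`) are illustrative and not part of the project:

```python
import json
import urllib.request

def build_payload(text, platform_override=None, port_override=None):
    """Assemble the JSON body described in the parameter table above."""
    payload = {"message": text}
    if platform_override is not None:
        payload["platform_override"] = platform_override
    if port_override is not None:
        payload["port_override"] = port_override
    return payload

def send_message(text, url="http://localhost:9111", **overrides):
    """POST the payload to the server and decode its JSON reply."""
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(text, **overrides)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

For example, `send_message("smoke in the hallway", port_override=1234)` sends the advanced request shown above.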
Success Response:

```json
{
  "status": "success",
  "message": "Data received successfully",
  "emergency": "response data"
}
```

Error Response:

```json
{
  "status": "error",
  "message": "error description"
}
```

The server handles various error scenarios:
- Invalid JSON format (400)
- Server processing errors (500)
- Platform connection issues
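Client code can tell the two reply shapes apart by branching on the `status` field. A sketch, using only the field names documented above:

```python
import json

def handle_response(raw):
    """Interpret a server reply using the success/error formats shown above."""
    reply = json.loads(raw)
    if reply.get("status") == "success":
        # The LLM's analysis is returned under the "emergency" key
        return reply["emergency"]
    # Error replies carry a human-readable description in "message"
    raise RuntimeError(reply.get("message", "unknown error"))
```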
- Fork the repository
- Create your feature branch
- Commit your changes
- Push to the branch
- Create a new Pull Request
[Your License Here]
[Your Name]
- LM Studio team
- Contributors to the project