This project demonstrates the deployment of a fully self-hosted Large Language Model (LLM) environment using Ollama and Open WebUI on Linux.
The system includes:
- Local LLM deployment
- Custom model creation using Ollama Modelfile
- API configuration and testing
- WebUI integration
- Troubleshooting of port and service conflicts
- Fully offline AI interaction without cloud APIs
The custom model Elion was built on top of a base model and configured with specialized system behavior and generation parameters.
Project objectives:

- Deploy Ollama on Linux
- Run and manage local LLM models
- Create a custom AI model (Elion)
- Integrate Ollama API with Open WebUI
- Debug and resolve API connection issues
- Successfully interact with a self-hosted AI assistant
Technologies used:

- Linux (Ubuntu)
- Ollama
- Open WebUI
- REST API
- Curl
- Git & GitHub
Install Ollama:

```bash
curl -fsSL https://ollama.com/install.sh | sh
```

Verify installation:

```bash
ollama --version
```

Start the server:

```bash
ollama serve
```

Server runs on:

```
http://127.0.0.1:11434
```
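On many Linux systems the install script also registers an `ollama` systemd service, so the server may already be running in the background. If `ollama serve` reports that the port is already in use, the service can be checked and stopped as sketched below (assuming systemd and a unit named `ollama`):

```bash
# Check whether a background Ollama service is already bound to port 11434
systemctl status ollama

# Stop it if you prefer to run the server manually in a terminal
sudo systemctl stop ollama
```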
Pull the base model:

```bash
ollama pull qwen2.5:3b-instruct-q4_K_M
```

Create a Modelfile:

```bash
nano Modelfile
```

Example Modelfile:

```
FROM qwen2.5:3b-instruct-q4_K_M
SYSTEM You are Elion, a professional AI assistant specialized in cybersecurity, Linux troubleshooting, and system diagnostics.
PARAMETER temperature 0.7
PARAMETER num_ctx 4096
PARAMETER top_p 0.9
```
Build the custom model and list the installed models:

```bash
ollama create elion -f Modelfile
ollama list
```

Expected output:

```
elion:latest
qwen2.5:3b-instruct-q4_K_M
```
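Before wiring up the WebUI, the new model can be inspected and queried directly from the terminal as a quick sanity check (a minimal sketch using the standard Ollama CLI; the prompt text is only an example):

```bash
# Print the Modelfile baked into the custom model (system prompt and parameters)
ollama show elion --modelfile

# One-off prompt to confirm the custom behavior is applied
ollama run elion "Introduce yourself in one sentence."
```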
Test the API with curl:

```bash
curl http://127.0.0.1:11434/api/generate -d '{
  "model": "elion",
  "prompt": "Explain Linux port conflicts",
  "stream": false
}'
```
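With `"stream": false` the API returns a single JSON object whose `response` field contains the generated text. To print only that field (assuming `jq` is installed):

```bash
# Same request, but extract just the model's reply
curl -s http://127.0.0.1:11434/api/generate -d '{
  "model": "elion",
  "prompt": "Explain Linux port conflicts",
  "stream": false
}' | jq -r '.response'
```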
Connect Open WebUI to Ollama:

- Launch Open WebUI
- Navigate to: Admin Panel → Settings → Connections
- Add the Ollama connection: `http://127.0.0.1:11434`
- Save and refresh
- Select `elion:latest` from the model dropdown
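These steps assume Open WebUI is already running. This project does not document how it was installed; as one common option, the pip-based install is launched as sketched below (serving on port 8080 by default):

```bash
# Install and start Open WebUI (pip-based install; Docker is another common option)
pip install open-webui
open-webui serve
```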
Troubleshooting port and service conflicts:

```bash
sudo lsof -i :11434
sudo killall ollama
```

- Use `http://127.0.0.1:11434` instead of `localhost`
- Ensure `ollama serve` is running
- Disable "Cache Base Model List" in WebUI settings
- Confirm Ollama is listening:

```bash
curl http://127.0.0.1:11434/api/version
```
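To make the connection check repeatable, the two diagnostics above can be combined into a small script (a sketch; assumes `curl` and `lsof` are available):

```bash
#!/usr/bin/env bash
# Report whether the Ollama API responds; if not, show what holds the port.
if curl -s --max-time 2 http://127.0.0.1:11434/api/version > /dev/null; then
    echo "Ollama API is reachable on 127.0.0.1:11434"
else
    echo "Ollama API is not responding; processes bound to port 11434:"
    sudo lsof -i :11434
fi
```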
Add these images inside the `/images` folder:
Recommended screenshots:
- Terminal showing `ollama serve`
- Modelfile content
- `ollama create elion`
- WebUI connection configuration
- Chat interaction with Elion
In summary, this project successfully deployed and configured a self-hosted AI assistant with:
- Custom model behavior
- Local API communication
- Web interface integration
- Fully offline operation
- Resolution of real-world service and networking issues