A custom transformer-based classifier that predicts the semantic complexity of user prompts — trained to help intelligently route prompts to the most suitable LLM (Large Language Model).
This is the model selection brain behind Nebriq — a minimalist, AI-powered note-taking app with a strong focus on AI-assisted workflows.
Instead of hardcoding logic like:

```python
if "analyze" in prompt:
    use("gpt-4")
```

…this classifier understands the prompt and determines whether it’s:
- simple: small talk, casual, or factual
- medium: instructional, contextual, or practical
- advanced: abstract, creative, analytical, or deeply technical
Users shouldn’t need to know which model is “best” for a task.
This classifier allows Nebriq to automatically route user prompts to the right LLM backend (e.g., GPT-4o-mini, Claude 3, DeepSeek, etc.) depending on semantic intent and complexity, making the UX faster, cheaper, and smarter.
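The routing itself can be a thin layer over the classifier's output. Here is a minimal sketch; the label-to-backend mapping and the confidence threshold are illustrative assumptions, not Nebriq's actual configuration:

```python
# Hypothetical routing layer over the classifier's output.
# The label → backend mapping and the 0.6 threshold are illustrative;
# Nebriq's real model pool and tuning may differ.
LABEL_TO_MODEL = {
    "simple": "gpt-4o-mini",  # cheap + fast for small talk / facts
    "medium": "claude-3",     # instructional / contextual prompts
    "advanced": "deepseek",   # abstract, analytical, deeply technical
}

def route(result: list[dict], threshold: float = 0.6) -> str:
    """Pick a backend from a text-classification pipeline result.

    Falls back to the strongest backend when the classifier is unsure.
    """
    top = max(result, key=lambda r: r["score"])
    if top["score"] < threshold:
        return LABEL_TO_MODEL["advanced"]  # be safe when uncertain
    return LABEL_TO_MODEL[top["label"]]

print(route([{"label": "advanced", "score": 0.94}]))
# → deepseek
```

Because the pipeline's output is just a list of label/score dicts, this layer stays decoupled from the classifier and can be retuned without retraining.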
- Trained on a small curated dataset of user-like prompts
- Built using 🤗 Transformers (distilbert-base-uncased)
- Uses the Hugging Face Trainer API
- Includes support for inference, training, and evaluation
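The three-way label scheme maps directly onto the `id2label`/`label2id` config that 🤗 Transformers sequence-classification models use. A minimal sketch of the dataset shape — the example prompts below are made up for illustration, not taken from the curated dataset:

```python
# Hypothetical training examples showing the expected dataset shape;
# the real curated dataset is not reproduced here.
examples = [
    {"text": "hey, how's it going?", "label": "simple"},
    {"text": "how do I export my notes to PDF?", "label": "medium"},
    {"text": "compare Kant's and Hume's accounts of causality", "label": "advanced"},
]

# Label maps in the form transformers expects in the model config
# (id2label / label2id on AutoModelForSequenceClassification).
labels = ["simple", "medium", "advanced"]
label2id = {label: i for i, label in enumerate(labels)}
id2label = {i: label for i, label in enumerate(labels)}

print(label2id)
# → {'simple': 0, 'medium': 1, 'advanced': 2}
```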
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="paulbg/nebriq-model-classifier")

prompt = "Model the economic effects of inflation on developing countries"
res = classifier(prompt)
print(res)
# → [{'label': 'advanced', 'score': 0.94}]
```

To train or run inference locally:

```shell
poetry install
poetry run python scripts/train.py      # for training
poetry run python scripts/inference.py  # for inference
```

Use cases:

- Prompt routing inside AI-native tools (like Nebriq)
- Custom moderation / filter / rerouting layers
- “AI load balancer” for hybrid LLM backends
This project is licensed under the MIT License. See the LICENSE file for details.