A high-throughput and memory-efficient inference and serving engine for LLMs (Python, 60.6k stars, 10.7k forks)
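A minimal sketch of offline batch inference with vLLM's Python API. The model name and prompt are placeholders; any Hugging Face checkpoint that vLLM supports works the same way.

```python
# Offline batch inference with vLLM. The engine pre-allocates GPU memory
# (PagedAttention) and batches requests internally for throughput.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # placeholder model; downloads on first use
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.outputs[0].text)
```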
Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM (Python, 2.1k stars, 264 forks)
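A hedged sketch of llm-compressor's one-shot quantization flow, modeled on the project's README; exact module paths, recipe options, and argument names vary across releases, so verify them against the version you install. The model, dataset, and output paths are placeholders.

```python
# One-shot post-training quantization with llm-compressor (sketch).
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # placeholder model
    dataset="open_platypus",                     # placeholder calibration data
    # Quantize every Linear layer to 4-bit weights; leave the output head intact.
    recipe=GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"]),
    output_dir="TinyLlama-1.1B-W4A16",           # load this directory with vLLM
    max_seq_length=2048,
    num_calibration_samples=512,
)
```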
Common recipes to run vLLM (Jupyter Notebook, 169 stars, 60 forks)
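For context, the usual way to run vLLM in serving mode is its OpenAI-compatible server, started with `vllm serve <model> --port 8000`. The sketch below queries such a server with the standard `openai` client; the model name, port, and prompt are placeholders.

```python
# Query a locally running `vllm serve` endpoint through the OpenAI client.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="Qwen/Qwen2.5-1.5B-Instruct",  # must match the served model
    messages=[{"role": "user", "content": "Summarize vLLM in one sentence."}],
)
print(resp.choices[0].message.content)
```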
Community-maintained hardware plugin for vLLM on Intel Gaudi
An intelligent router for mixture-of-models deployments
Evaluate and enhance your LLM deployments for real-world inference needs
Community-maintained hardware plugin for vLLM on Ascend
TPU inference for vLLM, with unified JAX and PyTorch support.
vLLM XPU kernels for Intel GPUs
Cost-efficient and pluggable infrastructure components for GenAI inference
Community-maintained hardware plugin for vLLM on Spyre
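The hardware-plugin repositories above (Gaudi, Ascend, Spyre) attach to vLLM out of tree through its platform-plugin entry points. The sketch below shows the general registration pattern under that assumption; the package, module, and class names (`vllm_myaccel`, `MyAccelPlatform`) are hypothetical, and each real plugin defines its own.

```python
# setup.py (sketch): register a hypothetical hardware platform with vLLM.
from setuptools import setup

setup(
    name="vllm-myaccel",        # hypothetical plugin package
    packages=["vllm_myaccel"],
    entry_points={
        # vLLM discovers platform plugins through this entry-point group
        # and calls each registered function at startup.
        "vllm.platform_plugins": [
            "myaccel = vllm_myaccel:register",
        ],
    },
)

# vllm_myaccel/__init__.py (sketch):
# def register() -> str | None:
#     # Return the platform class's import path when the hardware is
#     # present, or None so vLLM skips this plugin.
#     return "vllm_myaccel.platform.MyAccelPlatform"
```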
Code for the vLLM CI and performance benchmark infrastructure