
Pinned

  1. vllm

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 66.9k stars · 12.4k forks

  2. llm-compressor

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 2.5k stars · 351 forks

  3. recipes

    Common recipes to run vLLM

    Jupyter Notebook · 323 stars · 116 forks

  4. speculators

    A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM

    Python · 182 stars · 23 forks

  5. semantic-router

    System-level intelligent router for Mixture-of-Models

    Go · 2.6k stars · 379 forks
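
The vLLM engine pinned above exposes an OpenAI-compatible HTTP server. As a minimal sketch of getting started (assuming vLLM is installed locally; the model id and port below are illustrative placeholders, not taken from this page):

```shell
# Install vLLM and launch its OpenAI-compatible server
# (the model id is a placeholder; any supported Hugging Face model id works)
pip install vllm
vllm serve Qwen/Qwen2.5-0.5B-Instruct --port 8000

# From another shell, query the server's completions endpoint
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/Qwen2.5-0.5B-Instruct", "prompt": "Hello, vLLM!", "max_tokens": 16}'
```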

Repositories

Showing 10 of 31 repositories
  • vllm

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 66,919 stars · Apache-2.0 · 12,411 forks · 1,748 open issues (40 need help) · 1,352 open pull requests · Updated Jan 6, 2026
  • vllm-ascend

    Community-maintained hardware plugin for vLLM on Ascend

    Python · 1,535 stars · Apache-2.0 · 707 forks · 814 open issues (7 need help) · 235 open pull requests · Updated Jan 6, 2026
  • vllm-daily

    Daily summaries of merged vLLM PRs

    35 stars · 2 forks · 0 open issues · 0 open pull requests · Updated Jan 6, 2026
  • ci-infra

    This repo hosts the code for vLLM's CI and performance-benchmark infrastructure.

    HCL · 27 stars · Apache-2.0 · 53 forks · 0 open issues · 28 open pull requests · Updated Jan 6, 2026
  • vllm-omni

    A framework for efficient model inference with omni-modality models

    Python · 2,005 stars · Apache-2.0 · 253 forks · 119 open issues (25 need help) · 64 open pull requests · Updated Jan 6, 2026
  • semantic-router

    System-level intelligent router for Mixture-of-Models

    Go · 2,645 stars · Apache-2.0 · 379 forks · 82 open issues (13 need help) · 40 open pull requests · Updated Jan 6, 2026
  • tpu-inference

    TPU inference for vLLM, with unified JAX and PyTorch support.

    Python · 206 stars · Apache-2.0 · 67 forks · 17 open issues (1 needs help) · 84 open pull requests · Updated Jan 6, 2026
  • vllm-xpu-kernels

    vLLM XPU kernels for Intel GPUs

    C++ · 17 stars · Apache-2.0 · 16 forks · 1 open issue · 5 open pull requests · Updated Jan 6, 2026
  • llm-compressor

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 2,530 stars · Apache-2.0 · 351 forks · 74 open issues (17 need help) · 57 open pull requests · Updated Jan 6, 2026
  • guidellm

    Evaluate and enhance your LLM deployments for real-world inference needs

    Python · 784 stars · Apache-2.0 · 109 forks · 46 open issues (5 need help) · 19 open pull requests · Updated Jan 6, 2026