feat: Add LM Studio as a local model provider alongside Ollama #740

@hghalebi

Description

Feature Request

Add LM Studio as a local model provider alongside Ollama

Motivation

LM Studio demonstrates significant performance advantages over Ollama when running the same models:

  • 26-30% higher tokens/second on identical hardware
  • Provides detailed performance metrics (tokens/sec, time to first token) in API responses
  • GPU offloading enables partial acceleration when models exceed VRAM
  • OpenAI-compatible API at http://localhost:1234/v1/*

For local agent deployments, this performance difference is crucial for real-time responsiveness.

Proposal

Add a rig::providers::lmstudio module implementing the CompletionModel trait:

// rig-core/src/providers/lmstudio/mod.rs
pub struct Client {
    base_url: String,
    http_client: reqwest::Client,
}

impl Client {
    /// Build a client against an arbitrary LM Studio server URL.
    pub fn new(base_url: &str) -> Self {
        Self {
            base_url: base_url.to_string(),
            http_client: reqwest::Client::new(),
        }
    }

    /// Build a client against LM Studio's default local address.
    pub fn from_default() -> Self {
        Self::new("http://localhost:1234/v1")
    }
}
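
For illustration, a minimal usage sketch, assuming the provider mirrors the AgentBuilder pattern of Rig's existing OpenAI provider (the model name is hypothetical; it would be whatever model LM Studio has loaded):

// Hypothetical usage sketch; API shape assumed from Rig's OpenAI provider.
use rig::{completion::Prompt, providers::lmstudio};

#[tokio::main]
async fn main() -> Result<(), anyhow::Error> {
    // Connects to the default local server at http://localhost:1234/v1.
    let client = lmstudio::Client::from_default();

    let agent = client
        .agent("llama-3.1-8b-instruct") // example: whichever model is loaded
        .preamble("You are a helpful assistant.")
        .build();

    let answer = agent.prompt("Summarize the benefits of local inference.").await?;
    println!("{answer}");
    Ok(())
}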

The provider would:

  1. Use OpenAI-compatible endpoints (a path already proven by Rig's existing OpenAI provider)
  2. Support streaming responses
  3. Expose LM Studio's performance metrics in responses (see the sketch below)
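
As a sketch of point 3, the completion response could carry LM Studio's stats block as an optional extension of the standard OpenAI-compatible payload. The field names below are assumptions and would need to be verified against LM Studio's actual response schema:

// Sketch only: stats field names are assumed, not confirmed.
use serde::Deserialize;

#[derive(Debug, Deserialize)]
pub struct LmStudioStats {
    /// Generation throughput, in tokens per second.
    pub tokens_per_second: f64,
    /// Time to first token, in seconds.
    pub time_to_first_token: f64,
}

#[derive(Debug, Deserialize)]
pub struct CompletionResponse {
    /// Standard OpenAI-compatible fields (id, choices, usage, ...)
    /// would sit alongside this optional extension.
    #[serde(default)]
    pub stats: Option<LmStudioStats>,
}

Keeping stats optional means the same response type still deserializes cleanly if LM Studio omits the block.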

Metadata

Labels

feat, wontfix (This will not be worked on)
