[Feat] allow service discovery by service names #586


Conversation

learner0810
Contributor

@learner0810 learner0810 commented Jul 17, 2025

allow service discovery by service names

FIX #471

python3 -m  vllm_router.app --port=8000 --k8s-port=80 --service-discovery=k8s --k8s-service-discovery-type=service-name --k8s-label-selector=release=test\  --k8s-namespace=lzj --routing-logic=session --session-key=x-user-id --engine-stats-interval=10 --log-stats 
/root/code/production-stack/src/vllm_router/parsers/parser.py:20: RuntimeWarning: Failed to read commit hash:
No module named 'vllm_router._version'
  from vllm_router.version import __version__
[2025-07-18 02:15:20,301] INFO: Scraping metrics from 0 serving engine(s) (engine_stats.py:146:vllm_router.stats.engine_stats)
[2025-07-18 02:15:20,302] INFO: Initializing session-based routing logic with kwargs: {'session_key': 'x-user-id', 'lmcache_controller_port': 9000, 'prefill_model_labels': None, 'decode_model_labels': None, 'kv_aware_threshold': 2000} (routing_logic.py:462:vllm_router.routers.routing_logic)
INFO:     Started server process [109250]
INFO:     Waiting for application startup.
[2025-07-18 02:15:20,374] INFO: Found models on service vllm-opt125m-engine-service: ['facebook/opt-125m'] (service_discovery.py:877:vllm_router.service_discovery)
[2025-07-18 02:15:20,375] INFO: Discovered new serving engine vllm-opt125m-engine-service at running models: ['facebook/opt-125m'] (service_discovery.py:962:vllm_router.service_discovery)
[2025-07-18 02:15:20,400] INFO: httpx AsyncClient instantiated. Id 140710513065712 (httpx_client.py:31:vllm_router.httpx_client)
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
[2025-07-18 02:15:30,303] INFO: Scraping metrics from 1 serving engine(s) (engine_stats.py:146:vllm_router.stats.engine_stats)
[2025-07-18 02:15:30,304] INFO: 
==================================================
Server: http://vllm-opt125m-engine-service:80
Models:
  - facebook/opt-125m
 Engine Stats: No stats available
 Request Stats: No stats available
--------------------------------------------------
==================================================
 (log_stats.py:115:vllm_router.stats.log_stats)
INFO:     127.0.0.1:48508 - "GET /v1/models HTTP/1.1" 200 OK
[2025-07-18 02:15:40,305] INFO: 
==================================================
# curl http://0.0.0.0:8000/v1/models|jq
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   135  100   135    0     0  39393      0 --:--:-- --:--:-- --:--:-- 45000
{
  "object": "list",
  "data": [
    {
      "id": "facebook/opt-125m",
      "object": "model",
      "created": 1752805040,
      "owned_by": "vllm",
      "root": null,
      "parent": null
    }
  ]
}

BEFORE SUBMITTING, PLEASE READ THE CHECKLIST BELOW AND FILL IN THE DESCRIPTION ABOVE


  • Make sure the code changes pass the pre-commit checks.
  • Sign-off your commit by using -s when doing git commit
  • Try to classify PRs for easy understanding of the type of changes, such as [Bugfix], [Feat], and [CI].
Detailed Checklist

Thank you for your contribution to production-stack! Before submitting the pull request, please ensure the PR meets the following criteria. This helps us maintain the code quality and improve the efficiency of the review process.

PR Title and Classification

Please try to classify PRs so that the type of change is easy to understand. The PR title should be prefixed appropriately to indicate the type of change. Please use one of the following:

  • [Bugfix] for bug fixes.
  • [CI/Build] for build or continuous integration improvements.
  • [Doc] for documentation fixes and improvements.
  • [Feat] for new features in the cluster (e.g., autoscaling, disaggregated prefill, etc.).
  • [Router] for changes to the vllm_router (e.g., routing algorithm, router observability, etc.).
  • [Misc] for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.

Code Quality

The PR needs to meet the following code quality standards:

  • Pass all linter checks. Please use pre-commit to format your code. See README.md for installation.
  • The code needs to be well-documented so that future contributors can easily understand it.
  • Please include sufficient tests to ensure the change stays correct and robust. This includes both unit tests and integration tests.

DCO and Signed-off-by

When contributing changes to this project, you must agree to the DCO. Commits must include a Signed-off-by: header which certifies agreement with the terms of the DCO.

Using -s with git commit will automatically add this header.
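For example (the trailer is generated from your git config; the name and email below are placeholders):

git commit -s -m "[Feat] allow service discovery by service names"

# The commit message then ends with a trailer like:
#   Signed-off-by: Jane Doe <jane@example.com>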

What to Expect for the Reviews

We aim to address all PRs in a timely manner. If no one reviews your PR within 5 days, please @-mention one of @YuhanLiu11, @Shaoting-Feng, or @ApostaC.

@learner0810 learner0810 changed the title from "[Feat] service-discovery-support-k8s-serive-name" to "[Feat] allow service discovery by service names" on Jul 17, 2025

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @learner0810, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the Kubernetes service discovery capabilities within the vllm_router. It introduces a new mechanism to identify and connect to backend engines using their stable Kubernetes service names, offering an alternative to the existing pod IP-based discovery. This change improves the robustness and flexibility of how the router interacts with services deployed in a Kubernetes environment.

Highlights

  • Enhanced Kubernetes Service Discovery: A new mode for Kubernetes service discovery has been implemented, allowing the vllm_router to discover backend engines by their Kubernetes service names rather than individual pod IPs. This provides a more stable and abstract way to manage backend connections.
  • New Configuration Option: A new command-line argument, --k8s-service-discovery-type, has been added. This argument allows users to explicitly choose between pod-ip (the existing default behavior) and service-name for Kubernetes service discovery; a sketch of how this might be wired up follows this list.
  • Dedicated Service Discovery Class: A new class, K8sServiceNameServiceDiscovery, has been introduced. This class encapsulates the specific logic required for discovering and managing backend engines based on Kubernetes service names, including methods for checking service readiness and fetching model information.
  • Dependency Update: The psutil library has been added as a new dependency, likely to support system-level information retrieval within the new service discovery logic.
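As a rough illustration of how such an option might be parsed and dispatched (a minimal sketch: the flag name and its values come from this PR, but the parser wiring below is assumed and is not the actual vllm_router code):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--k8s-service-discovery-type",
    choices=["pod-ip", "service-name"],
    default="pod-ip",
    help="Discover engines by individual pod IPs (default) or by their "
    "stable Kubernetes service names.",
)
args = parser.parse_args(["--k8s-service-discovery-type=service-name"])

if args.k8s_service_discovery_type == "service-name":
    # Hypothetical dispatch: the PR introduces K8sServiceNameServiceDiscovery
    # for this branch; pod-ip keeps the existing pod-IP-based discovery.
    print("using service-name discovery")

With service-name discovery, the router addresses engines by a stable Service DNS name instead of individual pod IPs, so engine URLs survive pod restarts.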
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature               Command               Description
Code Review           /gemini review        Performs a code review for the current pull request in its current state.
Pull Request Summary  /gemini summary       Provides a summary of the current pull request in its current state.
Comment               @gemini-code-assist   Responds in comments when explicitly tagged, both in issue comments and review comments.
Help                  /gemini help          Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  [1] Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

@learner0810 learner0810 force-pushed the service-discovery-support-k8s-serive-name branch from 3fa3eee to 7596830 on July 17, 2025 11:14

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new service discovery mechanism for Kubernetes based on service names, in addition to the existing pod IP-based discovery. This enhances flexibility in environments where direct pod IP access is not ideal. The changes include a new command-line argument to select the discovery type, a new K8sServiceNameServiceDiscovery class implementing the logic, and updates to integrate everything.

I've identified a critical bug due to a typo that would cause a runtime error, a potential design issue with how prefill/decode clients are handled, and a few smaller issues related to code clarity and dependency management. Addressing these points will improve the code's reliability and maintainability.

Comment on lines +988 to +1016
if (
    self.prefill_model_labels is not None
    and self.decode_model_labels is not None
):
    if model_label in self.prefill_model_labels:
        self.app.state.prefill_client = httpx.AsyncClient(
            base_url=f"http://{engine_name}:{self.port}",
            timeout=None,
        )
    elif model_label in self.decode_model_labels:
        self.app.state.decode_client = httpx.AsyncClient(
            base_url=f"http://{engine_name}:{self.port}",
            timeout=None,
        )


Severity: high

The logic for setting prefill_client and decode_client on app.state will overwrite the client every time a new matching engine is discovered. If you expect to have multiple prefill or decode engines running concurrently, this will result in only the last discovered engine being used by the router.

If the design is to only have one prefill and one decode engine, consider adding a check to prevent overwriting, or logging a warning if a second one is discovered.

if model_label in self.prefill_model_labels:
    if hasattr(self.app.state, 'prefill_client'):
        logger.warning("Prefill client is already set. Overwriting with new engine: %s", engine_name)
    self.app.state.prefill_client = httpx.AsyncClient(
        base_url=f"http://{engine_name}:{self.port}",
        timeout=None,
    )
elif model_label in self.decode_model_labels:
    if hasattr(self.app.state, 'decode_client'):
        logger.warning("Decode client is already set. Overwriting with new engine: %s", engine_name)
    self.app.state.decode_client = httpx.AsyncClient(
        base_url=f"http://{engine_name}:{self.port}",
        timeout=None,
    )
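An alternative worth noting (a hypothetical sketch, not something this PR implements) is to keep one client per discovered engine, e.g. in a dict on app.state, so that multiple prefill or decode engines can coexist instead of the last discovered one winning:

if model_label in self.prefill_model_labels:
    # Hypothetical: a dict of clients keyed by engine name instead of a
    # single prefill_client attribute; the routing layer would then pick
    # among prefill_clients.values().
    if not hasattr(self.app.state, "prefill_clients"):
        self.app.state.prefill_clients = {}
    self.app.state.prefill_clients[engine_name] = httpx.AsyncClient(
        base_url=f"http://{engine_name}:{self.port}",
        timeout=None,
    )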

@learner0810 learner0810 force-pushed the service-discovery-support-k8s-serive-name branch 2 times, most recently from aa6d725 to f4d8f58 on July 18, 2025 03:45
@learner0810
Contributor Author

cc @zerofishnoodles PTAL

Contributor

@zerofishnoodles zerofishnoodles left a comment


This looks like a super good start! But I am wondering about its compatibility with the routing logic and the metrics part.

For the routing logic, it seems to me it can't perform KV-aware or PD routing, but rather only K8s-native pod load-balancing strategies like round-robin, or session-based routing.

Also, for the metrics: right now the metric service scrapes the vLLM metrics from the associated pod. When this is performed at the service level, it might not get the metrics of the same pod that actually ran the inference when there are multiple replicas.

So for now, I think the scenario for this is just one pod per service if we want the full functionality of the production stack? Correct me if I misunderstood it.
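For context on why per-pod metrics are hard to attribute here: with service-name discovery the router connects through the Service's virtual IP, and kube-proxy picks an arbitrary pod behind the Service for each connection. A minimal sketch of querying models through the service (using the service name, namespace, and port from the logs above; the helper itself is illustrative, not the PR's code):

import asyncio
import httpx

async def fetch_models(service_name: str, namespace: str, port: int) -> list[str]:
    # The Service DNS name is stable while pods come and go, but each request
    # may be answered by a different replica behind the Service, so responses
    # (and scraped metrics) cannot be pinned to a specific pod.
    url = f"http://{service_name}.{namespace}.svc.cluster.local:{port}/v1/models"
    async with httpx.AsyncClient(timeout=5.0) as client:
        resp = await client.get(url)
        resp.raise_for_status()
        return [model["id"] for model in resp.json()["data"]]

# Example: asyncio.run(fetch_models("vllm-opt125m-engine-service", "lzj", 80))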

@learner0810 learner0810 force-pushed the service-discovery-support-k8s-serive-name branch from f4d8f58 to fb04b77 on July 22, 2025 02:41
@learner0810
Contributor Author

> This looks like a super good start! But I am wondering about its compatibility with the routing logic and the metrics part. […]

Yes, you're absolutely right.

@learner0810 learner0810 force-pushed the service-discovery-support-k8s-serive-name branch from fb04b77 to 29f66b1 on July 23, 2025 02:03
Contributor

@zerofishnoodles zerofishnoodles left a comment


LGTM

@learner0810 learner0810 force-pushed the service-discovery-support-k8s-serive-name branch from efa96dc to 54421f9 on July 28, 2025 03:54
@zerofishnoodles
Contributor

> > This looks like a super good start! But I am wondering about its compatibility with the routing logic and the metrics part. […]
>
> Yes, you're absolutely right.

Hi, could you explicitly state this in the code so that others know about it?

@learner0810 learner0810 force-pushed the service-discovery-support-k8s-serive-name branch 2 times, most recently from f6b558a to 3e0fdb3 on July 29, 2025 02:50
@learner0810 learner0810 force-pushed the service-discovery-support-k8s-serive-name branch from 3e0fdb3 to 9c31fb7 on July 29, 2025 02:52
@learner0810
Contributor Author

> > > This looks like a super good start! But I am wondering about its compatibility with the routing logic and the metrics part. […]
> >
> > Yes, you're absolutely right.
>
> Hi, could you explicitly state this in the code so that others know about it?

Already added, please have a look.

Contributor

@zerofishnoodles zerofishnoodles left a comment


LGTM

@YuhanLiu11
Collaborator

@learner0810 thanks for the PR! I'm merging it now. Can you also write a tutorial on how to use this feature? Also, make it explicit in the tutorial that this assumes one pod per service.

@YuhanLiu11 YuhanLiu11 merged commit 13f5281 into vllm-project:main Jul 29, 2025
14 checks passed
@learner0810 learner0810 deleted the service-discovery-support-k8s-serive-name branch July 30, 2025 05:34
@learner0810
Contributor Author

> @learner0810 thanks for the PR! I'm merging it now. Can you also write a tutorial on how to use this feature? Also, make it explicit in the tutorial that this assumes one pod per service.

Okay, I'll write it later.
