
Conversation

@offline893 (Contributor) commented Oct 25, 2025

What this PR does / why we need it?

Add a developer guide for EPLB.

Does this PR introduce any user-facing change?

How was this patch tested?

@github-actions bot commented

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling out the PR description to help reviewers and future developers understand.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

@github-actions github-actions bot added the documentation Improvements or additions to documentation label Oct 25, 2025
@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request introduces a new developer guide for the Expert Parallelism Load Balancer (EPLB). While adding documentation is a valuable contribution, the current version of the guide contains several significant errors that could mislead developers. I have identified issues such as references to non-existent parameters, incorrect file paths, incorrect language-specific terminology (using try-catch for Python), and a typo in a critical environment variable. Correcting these is important for the documentation to be accurate and useful. My review provides specific suggestions for each of these points.

In other cases, we use the global load balancing policy, which replicates experts globally regardless of expert groups, and packs the replicated experts onto individual GPUs. This policy can be adopted in the decoding stage with a larger expert-parallel size.
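To make the "pack the replicated experts onto individual GPUs" step concrete, here is an illustrative greedy-packing sketch: sort replicas by estimated load and repeatedly place the heaviest one on the least-loaded device. This is not the repository's algorithm, just a minimal model of the idea.

```python
import heapq

# Illustrative sketch only -- not the repository's placement algorithm.
# Greedy "global" placement: sort experts by load and repeatedly place
# the heaviest remaining one on the currently least-loaded device.
def pack_experts(expert_loads, num_devices):
    """Return device -> list of expert ids, balancing total load."""
    heap = [(0.0, dev) for dev in range(num_devices)]  # (load, device)
    heapq.heapify(heap)
    placement = {dev: [] for dev in range(num_devices)}
    for eid, load in sorted(expert_loads.items(), key=lambda kv: -kv[1]):
        dev_load, dev = heapq.heappop(heap)
        placement[dev].append(eid)
        heapq.heappush(heap, (dev_load + load, dev))
    return placement

print(pack_experts({0: 9.0, 1: 7.0, 2: 3.0, 3: 1.0}, 2))
# → {0: [0, 3], 1: [1, 2]}  (each device carries a total load of 10.0)
```

The min-heap keeps the selection of the least-loaded device O(log n) per placement, which is why greedy packing like this scales to large expert counts.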

### Add a New MoE Model
When adding a new model, inherit or modify `VllmEplbAdaptor`. Add the processing logic for `num_dense_layers`, `global_expert_num`, and `num_roe_layers`, and synchronize the relevant logic within the `model_register` function.
Severity: high

The documentation mentions `num_roe_layers` as a parameter to handle when adding a new model. However, this parameter does not appear to be used in the related implementation files (`vllm_ascend/eplb/adaptor/vllm_adaptor.py`, `vllm_ascend/eplb/utils.py`). This is misleading for developers. Please remove it or clarify its purpose.

Suggested change
When adding a new model, inherit or modify `VllmEplbAdaptor`. Add the processing logic for `num_dense_layers`, `global_expert_num`, and `num_roe_layers`, and synchronize the relevant logic within the `model_register` function.
When adding a new model, inherit or modify `VllmEplbAdaptor`. Add the processing logic for `num_dense_layers` and `global_expert_num`, and synchronize the relevant logic within the `model_register` function.
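For illustration, a subclass following the pattern the guide describes might look like the following. The base-class and attribute names come from the text above; the concrete model values and the subclass name are hypothetical, and the real base class lives in `vllm_ascend/eplb/adaptor/vllm_adaptor.py`.

```python
# Hypothetical sketch: attribute names follow the guide's text; the
# stand-in base class below is NOT the real vllm_ascend adaptor API.
class VllmEplbAdaptor:  # stand-in for the real adaptor base class
    def __init__(self):
        self.num_dense_layers = 0
        self.global_expert_num = 0

class MyMoEModelAdaptor(VllmEplbAdaptor):
    def __init__(self):
        super().__init__()
        # Model-specific values (illustrative numbers only).
        self.num_dense_layers = 3      # leading dense (non-MoE) layers
        self.global_expert_num = 256   # total experts across the model
```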


If you want to add MoE-related processing to the model, add corresponding methods to `VLLM/EPLB/utils` and add patch logic in the `model_register` function.
Severity: high

The file path `VLLM/EPLB/utils` is incorrect. Based on the project structure, the correct path appears to be `vllm_ascend/eplb/utils.py`. Providing an incorrect path can mislead developers trying to extend the functionality.

Suggested change
If you want to add MoE-related processing to the model, add corresponding methods to `VLLM/EPLB/utils` and add patch logic in the `model_register` function.
If you want to add MoE-related processing to the model, add corresponding methods to `vllm_ascend/eplb/utils.py` and add patch logic in the `model_register` function.

All EPLB parameters must be initialized with defaults, and their parameter types and default values must be specified so they can be handled properly.
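One way to satisfy this convention is a typed config object where every field carries both a type annotation and a default. The field names below are illustrative, not the actual EPLB configuration schema.

```python
from dataclasses import dataclass

# Illustrative only: field names are hypothetical, but every field has
# an explicit type and a default value, as the convention requires.
@dataclass
class EplbConfig:
    dynamic_eplb: bool = False
    num_redundant_experts: int = 0
    record_expert_map: bool = False

cfg = EplbConfig()                       # all defaults apply
tuned = EplbConfig(dynamic_eplb=True)    # override a single field
```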

#### General Functions
All method arguments must specify parameter types and default values, and functions must include default return value handling for default arguments. It is recommended to use `try-catch` blocks to handle the function body, specifying the type of exception captured and the failure handling (e.g., logging exceptions or returning a failure status).
Severity: high

The documentation recommends using `try-catch` blocks for exception handling. However, in Python the correct syntax is `try-except`. This should be corrected to avoid confusion for Python developers.

Suggested change
All method arguments must specify parameter types and default values, and functions must include default return value handling for default arguments. It is recommended to use `try-catch` blocks to handle the function body, specifying the type of exception captured and the failure handling (e.g., logging exceptions or returning a failure status).
All method arguments must specify parameter types and default values, and functions must include default return value handling for default arguments. It is recommended to use `try-except` blocks to handle the function body, specifying the type of exception captured and the failure handling (e.g., logging exceptions or returning a failure status).


## Limitation
Before using EPLB, start the script and add `export DYNAMIC_EPLB="true"`.
Before performing load data collection (or performance data collection), start the script and add `export EXPORT_MAP_RECORD="true"`.
Severity: high

There is a typo in the environment variable name. The documentation specifies `EXPORT_MAP_RECORD`, but the code in `vllm_ascend/eplb/core/eplb_utils.py` checks for `EXPERT_MAP_RECORD`. A developer following the documentation would encounter an error. This should be corrected.

Suggested change
Before performing load data collection (or performance data collection), start the script and add `export EXPORT_MAP_RECORD="true"`.
Before performing load data collection (or performance data collection), start the script and add `export EXPERT_MAP_RECORD="true"`.
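Combined with the `DYNAMIC_EPLB` flag mentioned earlier in the Limitation section, a launch script would export both variables before starting the server; the serving command itself is omitted here since it depends on the deployment.

```shell
# Enable dynamic EPLB and expert-load recording before launching.
export DYNAMIC_EPLB="true"
export EXPERT_MAP_RECORD="true"
# ...then start the serving script as usual.
echo "DYNAMIC_EPLB=$DYNAMIC_EPLB EXPERT_MAP_RECORD=$EXPERT_MAP_RECORD"
```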

# Expert Parallelism Load Balancer (EPLB)

## Why We Need EPLB?
When using Expert Parallelism (EP), different experts are assigned to different GPUs/NPUs. Given that the load of various experts may vary depending on the current workload, it is crucial to maintain balanced loads across different GPUs/NPUs. We adopt a redundant experts strategy by duplicating heavily-loaded experts. Then, we heuristically pack these duplicated experts onto GPUs to ensure load balancing across them. Moreover, thanks to the group-limited expert routing used in MoE models, we also attempt to place experts of the same group on the same node to reduce inter-node data traffic, whenever possible.
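To make the redundant-experts idea concrete, here is a toy calculation that hands out a fixed budget of extra replica slots, always to the expert with the highest per-replica load. The function, numbers, and tie-breaking are illustrative only, not the repository's policy.

```python
# Illustrative sketch of the redundant-experts idea: every expert gets
# at least one slot, and each extra replica slot goes to whichever
# expert currently has the highest load per replica.
def replica_counts(loads, extra_slots):
    counts = [1] * len(loads)
    for _ in range(extra_slots):
        per_replica = [l / c for l, c in zip(loads, counts)]
        counts[per_replica.index(max(per_replica))] += 1
    return counts

print(replica_counts([8.0, 4.0, 2.0, 2.0], 2))
# → [3, 1, 1, 1]  (both extra slots go to the heaviest expert)
```

In practice the balancer would feed counts like these into a placement step such as the greedy packing shown earlier in this guide's discussion of the global policy.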
Collaborator:
Please change all occurrences of GPU to NPU.


To facilitate reproduction and deployment, we open-source our deployed EP load balancing algorithm in `vllm_ascend/eplb/core/policy`. The algorithm computes a balanced expert replication and placement plan based on the estimated expert loads. Note that the exact method for predicting expert loads is outside the scope of this repository. A common method is to use a moving average of historical statistics.
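Since the text mentions that a common load predictor is a moving average of historical statistics, here is a minimal exponential-moving-average sketch of that idea. The class name and parameters are hypothetical, not the repository's code.

```python
# Illustrative EMA load predictor: blends each new per-expert load
# observation into a running estimate. Hypothetical, not the
# repository's implementation.
class EmaLoadEstimator:
    def __init__(self, num_experts: int, alpha: float = 0.2):
        self.alpha = alpha                    # weight of the newest sample
        self.estimate = [0.0] * num_experts   # current load estimate

    def update(self, observed_loads):
        self.estimate = [
            (1 - self.alpha) * est + self.alpha * obs
            for est, obs in zip(self.estimate, observed_loads)
        ]
        return self.estimate
```

A higher `alpha` tracks bursts faster, while a lower one smooths out transient load spikes before they trigger a rebalance.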
Collaborator:
No need to mention open-source. It can be something like: vLLM Ascend supports xxx.

Please refer to the EPLB section of the user guide for detailed information: [How to Use EPLB](../../user_guide/feature_guide/eplb_swift_balancer.md)

## How It Works?

Collaborator:
Please add more about the module design. For example, what are EplbUpdator, EplbWorker, etc., and how do they work?

### Default Algorithm
#### Hierarchical Load Balancing
Collaborator:
Please add a section describing how developers can register a new algorithm.
