docs(hps): add Triton instance count configuration docs #17765

Merged
Bobholamovic merged 9 commits into PaddlePaddle:main from scyyh11:docs/add-triton-instance-count-docs
Mar 9, 2026
Conversation

Collaborator

@scyyh11 scyyh11 commented Mar 4, 2026

Add documentation for the Triton `instance_group` count parameter in both Chinese and English READMEs, covering configuration examples and tuning guidance for GPU and CPU models.

@scyyh11 scyyh11 requested a review from Bobholamovic March 4, 2026 09:08
@paddle-bot

paddle-bot bot commented Mar 4, 2026

Thanks for your contribution!

Bobholamovic and others added 4 commits March 5, 2026 11:38
Rewrite instance count docs to accurately describe behavior: single
instance processes one batch at a time with intra-batch blocking,
while multiple instances enable parallel batch processing to reduce
queuing latency.
Clarify that Triton only loads the layout detection model
(PP-DocLayoutV3) while VLM is served by vLLM separately. Note that
multiple instances improve per-request latency, and that additional
instances also increase vLLM load, CPU and memory usage.
…ure section

Add a note under the Triton models table clarifying that only the
layout detection model (PP-DocLayoutV3) runs in Triton, while VLM
is served separately by vLLM.
| `layout-parsing` | Inference device (e.g., GPU) | Layout parsing inference |
| `restructure-pages` | CPU | Post-processing of multi-page results (cross-page table merging, heading level reassignment) |

> Note: The Triton service only loads the layout detection model (PP-DocLayoutV3); the VLM model is served by a separate vLLM inference service.
Member


This feels better explained in the main text, e.g., in the table above: the Triton server contains the layout detection model plus the pipeline orchestration logic, and the vLLM server contains the VLM (the "M" in VLM already means "model", so there is no need to repeat it).


### Triton Instance Count

Only the layout detection model (PP-DocLayoutV3) is loaded in the Triton service; the VLM model is served by a separate vLLM inference service. The number of parallel inference instances for each Triton model is configured via `instance_group` in `config.pbtxt` (default: 1). Increasing the instance count improves parallel processing capacity but consumes more device resources.
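As a sketch of what this looks like in practice (the model name and GPU index below are illustrative assumptions, not taken from the PR), the `instance_group` setting in a Triton `config.pbtxt` might be written as:

```protobuf
# Illustrative config.pbtxt fragment; model name and GPU id are assumed.
name: "layout-parsing"

instance_group [
  {
    # Number of parallel inference instances for this model (default: 1).
    count: 2
    kind: KIND_GPU
    gpus: [ 0 ]
  }
]
```

For a CPU-only model, `kind: KIND_CPU` would be used instead, and the `gpus` field omitted.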
Member


Suggest removing the first sentence

Integrate layout detection model (PP-DocLayoutV3) and VLM info into
the architecture component table instead of a separate note. Remove
redundant first sentence in Triton instance count section.
There is a trade-off between instance count and dynamic batching:

- **Single instance (`count: 1`)**: Dynamic batching merges multiple requests into one batch for parallel execution, but requests in the same batch must wait for the slowest one to finish before returning together, which may raise latency for some requests. Also, a single instance can only process one batch at a time; while the current batch is in flight, subsequent requests must queue. Suitable for scenarios with limited GPU memory or fairly uniform request processing times
- **Multiple instances (`count: 2+`)**: Multiple instances can each process different batches simultaneously, handling more requests at once, reducing queuing time, and improving per-request latency. Note that batches within the same instance still follow dynamic batching behavior (requests in a batch start and finish together). Each additional instance consumes GPU memory for an extra copy of the layout detection model (PP-DocLayoutV3), and also increases the load on the vLLM inference service as well as memory and CPU usage; set the count according to the resources of your inference device
Member


There is no need to annotate "PP-DocLayoutV3" again for the layout detection model here; this also makes it easier to swap models later without missing an edit.

Member


Likewise, "vLLM inference service" should be changed to "VLM inference service", which is more general.

Remove redundant model name annotation and use "VLM" instead of "vLLM"
for better generality per review feedback.
There is a trade-off between instance count and dynamic batching:

- **Single instance (`count: 1`)**: Dynamic batching combines multiple requests into one batch for parallel execution, but all requests in the same batch must wait for the slowest one to finish before results are returned, which may increase latency for faster requests. Additionally, a single instance can only process one batch at a time — subsequent requests must queue until the current batch completes. Best suited for scenarios with limited GPU memory or uniform request processing times
- **Multiple instances (`count: 2+`)**: Multiple instances can process different batches simultaneously, allowing more requests to be handled concurrently. This reduces queuing time and improves latency for individual requests. Note that within each instance, dynamic batching behavior still applies (requests in the same batch start and finish together). Each additional instance consumes GPU memory for an extra copy of the layout detection model (PP-DocLayoutV3), increases the load on the vLLM inference service, and uses more CPU and system memory. Adjust the count based on the available resources of your inference device
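To make the trade-off concrete, a hedged `config.pbtxt` sketch combining multiple instances with dynamic batching might look like this (the values are illustrative, not tuned recommendations):

```protobuf
# Illustrative only: tune count and batching settings to your hardware and workload.
instance_group [
  {
    count: 2          # two parallel instances, each holding its own model copy
    kind: KIND_GPU
    gpus: [ 0 ]
  }
]

dynamic_batching {
  # How long Triton may hold a request while waiting to form a larger batch.
  max_queue_delay_microseconds: 100
}
```

Within each of the two instances, batched requests still start and finish together; the second instance helps only by picking up the next batch while the first is busy.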
Member


The English documentation needs the same changes as well.

@Bobholamovic Bobholamovic merged commit c9a1d6c into PaddlePaddle:main Mar 9, 2026
6 checks passed