
Conversation

@Isotr0py (Member)

  • Update DeepSeek-OCR online serving usage with custom logits processor

@gemini-code-assist (bot)

Summary of Changes

Hello @Isotr0py, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the DeepSeek-OCR documentation by introducing a comprehensive guide for online OCR serving. It outlines the process of deploying DeepSeek-OCR with an OpenAI-compatible API server and provides a practical Python example, highlighting the importance of a custom logits processor for achieving superior OCR results. This addition makes the model more accessible for real-time applications.

Highlights

  • Online OCR Serving Documentation: A new section has been added to the DeepSeek-OCR.md documentation, providing detailed instructions and an example for setting up DeepSeek-OCR for online OCR serving using an OpenAI-compatible API server.
  • Custom Logits Processor Integration: The online serving example explicitly demonstrates how to use a custom NGramPerReqLogitsProcessor with vllm serve and configure it via vllm_xargs in the OpenAI client, which is crucial for optimal OCR and markdown generation performance (a sketch of the serve invocation follows this list).
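For orientation, the serve invocation the new docs section covers looks roughly like the following. This is a sketch, not the merged docs: the `--logits-processors` flag takes a fully qualified class path, and the module path shown for `NGramPerReqLogitsProcessor` is an assumption; the merged DeepSeek-OCR.md is authoritative.

```bash
# Sketch of the serve command described above. The module path for
# NGramPerReqLogitsProcessor is an assumption -- check the merged
# DeepSeek-OCR docs for the canonical path.
vllm serve deepseek-ai/DeepSeek-OCR \
  --logits-processors vllm.model_executor.models.deepseek_ocr:NGramPerReqLogitsProcessor
```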


@gemini-code-assist (bot) left a comment


Code Review

This pull request updates the DeepSeek-OCR documentation to include instructions for online serving. The changes are good, adding a new section with a command to start the server and a Python client example. I've suggested an improvement to the Python example to make it more robust and comprehensive by setting the temperature for deterministic outputs and including the whitelist_token_ids parameter to showcase more of the vllm_xargs functionality, aligning it with the offline example.

Comment on lines 104 to 116
```python
response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-OCR",
    messages=messages,
    max_tokens=2048,
    extra_body={
        "skip_special_tokens": False,
        # args used to control custom logits processor
        "vllm_xargs": {
            "ngram_size": 30,
            "window_size": 90,
        },
    },
)
```
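The snippet above assumes a `client` and `messages` defined earlier in the docs. As a self-contained sketch for readers following along (the base URL, API key, prompt text, and image URL below are placeholders, not taken from the PR):

```python
# Self-contained sketch around the snippet under review. Assumes a vLLM
# OpenAI-compatible server for deepseek-ai/DeepSeek-OCR on localhost:8000;
# the prompt text and image URL are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # vLLM's OpenAI-compatible endpoint
    api_key="EMPTY",  # ignored by vLLM unless the server sets --api-key
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/page.png"}},
            {"type": "text", "text": "Free OCR."},  # placeholder prompt
        ],
    }
]

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-OCR",
    messages=messages,
    max_tokens=2048,
    extra_body={
        "skip_special_tokens": False,
        # args used to control the custom logits processor
        "vllm_xargs": {"ngram_size": 30, "window_size": 90},
    },
)
print(response.choices[0].message.content)
```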

Severity: medium

For better reproducibility and to make the example more comprehensive, I suggest a couple of improvements:

  1. Set temperature: For OCR tasks, deterministic output is usually desired. Setting temperature=0.0 ensures this and aligns with the offline example.
  2. Show more vllm_xargs: To provide a more complete example, it would be helpful to include whitelist_token_ids in vllm_xargs, similar to the offline inference example. This demonstrates how to pass list-based arguments.
Suggested change
```diff
 response = client.chat.completions.create(
     model="deepseek-ai/DeepSeek-OCR",
     messages=messages,
     max_tokens=2048,
+    temperature=0.0,
     extra_body={
         "skip_special_tokens": False,
         # args used to control custom logits processor
         "vllm_xargs": {
             "ngram_size": 30,
             "window_size": 90,
+            "whitelist_token_ids": [128821, 128822],  # whitelist: <td>, </td>
         },
     },
 )
```

Member


Can you verify this?

@Isotr0py (Member, Author), Oct 27, 2025


whitelist_token_ids is rejected by field validation, because vllm_xargs currently only allows strings and numbers; we need to wait for vllm-project/vllm#27560 to allow using lists inside vllm_xargs:

```text
openai.BadRequestError: Error code: 400 - {'error': {'message': "[{'type': 'string_type', 'loc': ('body', 'vllm_xargs', 'whitelist_token_ids', 'str'), 'msg': 'Input should be a valid string', 'input': [128821, 128822]}, {'type': 'int_type', 'loc': ('body', 'vllm_xargs', 'whitelist_token_ids', 'int'), 'msg': 'Input should be a valid integer', 'input': [128821, 128822]}, {'type': 'float_type', 'loc': ('body', 'vllm_xargs', 'whitelist_token_ids', 'float'), 'msg': 'Input should be a valid number', 'input': [128821, 128822]}]", 'type': 'Bad Request', 'param': None, 'code': 400}}
```
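In other words, until that PR lands, only scalar (string/number) values pass validation. A variant of the request that works today keeps the suggested temperature=0.0 but drops the list-valued argument (a sketch, reusing the client and messages from above):

```python
# Passes field validation today: vllm_xargs holds only scalar values.
# List-valued args such as whitelist_token_ids need vllm-project/vllm#27560.
response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-OCR",
    messages=messages,
    max_tokens=2048,
    temperature=0.0,  # deterministic output, per the review suggestion
    extra_body={
        "skip_special_tokens": False,
        "vllm_xargs": {"ngram_size": 30, "window_size": 90},
    },
)
```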

Signed-off-by: Isotr0py <[email protected]>
@ywang96 ywang96 merged commit 9d26cee into vllm-project:main Nov 3, 2025
2 checks passed
@Isotr0py Isotr0py deleted the deepseek-ocr-online branch November 4, 2025 04:05