Add online serving usage with custom logits processor for DeepSeek-OCR #101
Conversation
Isotr0py commented on Oct 27, 2025
- Update DeepSeek-OCR online serving usage with custom logits processor
Signed-off-by: Isotr0py <[email protected]>
Summary of Changes

Hello @Isotr0py, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances the DeepSeek-OCR documentation with a guide for online OCR serving: it outlines how to deploy DeepSeek-OCR behind an OpenAI-compatible API server and provides a practical Python example, highlighting the importance of a custom logits processor for achieving good OCR results. This addition makes the model more accessible for real-time applications.
Code Review
This pull request updates the DeepSeek-OCR documentation to include instructions for online serving. The changes are good, adding a new section with a command to start the server and a Python client example. I've suggested an improvement to the Python example to make it more robust and comprehensive by setting the temperature for deterministic outputs and including the whitelist_token_ids parameter to showcase more of the vllm_xargs functionality, aligning it with the offline example.
```python
response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-OCR",
    messages=messages,
    max_tokens=2048,
    extra_body={
        "skip_special_tokens": False,
        # args used to control custom logits processor
        "vllm_xargs": {
            "ngram_size": 30,
            "window_size": 90,
        },
    },
)
```
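(The snippet above assumes a running server and pre-built `client` and `messages` objects. A minimal end-to-end sketch of that setup, assuming vLLM's OpenAI-compatible server defaults; the image URL and prompt text here are placeholders, not from the PR:)

```python
from openai import OpenAI

# Assumes a local `vllm serve deepseek-ai/DeepSeek-OCR ...` instance is already
# running; base URL and API key are vLLM's OpenAI-compatible server defaults.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Example chat payload: the image URL is a placeholder, and the prompt text
# is an assumption modeled on DeepSeek-OCR's free-form OCR prompts.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/page.png"}},
            {"type": "text", "text": "Free OCR."},
        ],
    }
]
```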
For better reproducibility and to make the example more comprehensive, I suggest a couple of improvements:
- Set temperature: For OCR tasks, deterministic output is usually desired. Setting `temperature=0.0` ensures this and aligns with the offline example.
- Show more `vllm_xargs`: To provide a more complete example, it would be helpful to include `whitelist_token_ids` in `vllm_xargs`, similar to the offline inference example. This demonstrates how to pass list-based arguments.
Suggested change (adds `temperature=0.0` and `whitelist_token_ids`):

```python
response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-OCR",
    messages=messages,
    max_tokens=2048,
    temperature=0.0,
    extra_body={
        "skip_special_tokens": False,
        # args used to control custom logits processor
        "vllm_xargs": {
            "ngram_size": 30,
            "window_size": 90,
            "whitelist_token_ids": [128821, 128822],  # whitelist: <td>, </td>
        },
    },
)
```
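(For context, the offline example the reviewer cites registers the processor at engine construction and passes the same knobs per request. A rough sketch, assuming vLLM's V1 custom logits processor API; the `NGramPerReqLogitsProcessor` import path and the `extra_args` field are assumptions and may differ across vLLM versions:)

```python
from vllm import LLM, SamplingParams
# Import path is an assumption; check the DeepSeek-OCR example in your vLLM version.
from vllm.model_executor.models.deepseek_ocr import NGramPerReqLogitsProcessor

llm = LLM(
    model="deepseek-ai/DeepSeek-OCR",
    logits_processors=[NGramPerReqLogitsProcessor],  # register the custom processor
)
sampling_params = SamplingParams(
    temperature=0.0,  # deterministic decoding for OCR
    max_tokens=2048,
    skip_special_tokens=False,
    # per-request args consumed by the logits processor
    extra_args={
        "ngram_size": 30,
        "window_size": 90,
        "whitelist_token_ids": [128821, 128822],  # <td>, </td>
    },
)
```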
Can you verify this?
`whitelist_token_ids` is rejected by field validation, because `vllm_xargs` currently only allows strings and numbers; we need to wait for vllm-project/vllm#27560 to allow lists inside `vllm_xargs`:
```
openai.BadRequestError: Error code: 400 - {'error': {'message': "[{'type': 'string_type', 'loc': ('body', 'vllm_xargs', 'whitelist_token_ids', 'str'), 'msg': 'Input should be a valid string', 'input': [128821, 128822]}, {'type': 'int_type', 'loc': ('body', 'vllm_xargs', 'whitelist_token_ids', 'int'), 'msg': 'Input should be a valid integer', 'input': [128821, 128822]}, {'type': 'float_type', 'loc': ('body', 'vllm_xargs', 'whitelist_token_ids', 'float'), 'msg': 'Input should be a valid number', 'input': [128821, 128822]}]", 'type': 'Bad Request', 'param': None, 'code': 400}}
```
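(The shape of that error matches Pydantic rejecting a union-typed dict value. A minimal, self-contained repro of the same validation behavior; the `VllmXargs` model here is a hypothetical stand-in for vLLM's actual request schema:)

```python
from pydantic import BaseModel, ValidationError

# Hypothetical stand-in for the server's request schema: vllm_xargs values
# are constrained to str | int | float, so a list value is rejected.
class VllmXargs(BaseModel):
    vllm_xargs: dict[str, str | int | float]

try:
    VllmXargs(vllm_xargs={"whitelist_token_ids": [128821, 128822]})
except ValidationError as e:
    # Prints string_type / int_type / float_type errors, matching the 400 above.
    print(e)
```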
Signed-off-by: Isotr0py <[email protected]>