Commit 287bbbe

[Doc] Fix typo in serving docs (vllm-project#28474)
Signed-off-by: the-codeboy <[email protected]>
1 parent 3143eb2 commit 287bbbe

File tree

1 file changed: +2 −2 lines changed


docs/serving/openai_compatible_server.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -77,11 +77,11 @@ In addition, we have the following custom APIs:
 
 In order for the language model to support chat protocol, vLLM requires the model to include
 a chat template in its tokenizer configuration. The chat template is a Jinja2 template that
-specifies how are roles, messages, and other chat-specific tokens are encoded in the input.
+specifies how roles, messages, and other chat-specific tokens are encoded in the input.
 
 An example chat template for `NousResearch/Meta-Llama-3-8B-Instruct` can be found [here](https://github.com/meta-llama/llama3?tab=readme-ov-file#instruction-tuned-models)
 
-Some models do not provide a chat template even though they are instruction/chat fine-tuned. For those model,
+Some models do not provide a chat template even though they are instruction/chat fine-tuned. For those models,
 you can manually specify their chat template in the `--chat-template` parameter with the file path to the chat
 template, or the template in string form. Without a chat template, the server will not be able to process chat
 and all chat requests will error.
```
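The passage being corrected describes a chat template as a Jinja2 template that encodes roles, messages, and chat-specific tokens into the model's input. As a minimal sketch of that idea, the snippet below renders a message list through a toy Jinja2 template; the template string and the `<|…|>` tokens are purely illustrative assumptions, not the template of any real model:

```python
# Illustrative sketch: how a Jinja2 chat template turns a list of
# {role, content} messages into a single prompt string.
# NOTE: this template and its <|...|> tokens are hypothetical examples,
# not the actual chat template of any model.
from jinja2 import Template

chat_template = Template(
    "{% for m in messages %}"
    "<|{{ m['role'] }}|>{{ m['content'] }}<|end|>"
    "{% endfor %}"
)

messages = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi there"},
]

prompt = chat_template.render(messages=messages)
print(prompt)  # <|user|>Hello<|end|><|assistant|>Hi there<|end|>
```

In practice, a template like this ships inside the model's tokenizer configuration; as the docs note, models that lack one can be served by passing a template file path or template string via vLLM's `--chat-template` parameter.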
