Commit 68b69db

OLS-1790: Document the supported vLLM version in OLS
1 parent: 642f28c

File tree

1 file changed: +5 −2 lines

modules/ols-large-language-model-requirements.adoc

Lines changed: 5 additions & 2 deletions
@@ -41,14 +41,17 @@ To use {azure-official} with {ols-official}, you need access to link:https://azu
 
 {rhelai} is OpenAI API-compatible, and is configured in a similar manner as the OpenAI provider.
 
-You can configure {rhelai} as the (Large Language Model) LLM provider.
+You can configure {rhelai} as the LLM provider.
 
 Because the {rhel} is in a different environment than the {ols-long} deployment, the model deployment must allow access using a secure connection. For more information, see link:https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.2/html-single/building_your_rhel_ai_environment/index#creating_secure_endpoint[Optional: Allowing access to a model from a secure endpoint].
 
+{ols-long} version 1.0 supports vLLM Server version 0.8.4. When self-hosting an LLM with {rhelai}, you can use vLLM Server as the inference engine for your model deployment.
 
 [id="rhoai_{context}"]
 == {rhoai}
 
 {rhoai} is OpenAI API-compatible, and is configured largely the same as the OpenAI provider.
 
-You need a Large Language Model (LLM) deployed on the single model-serving platform of {rhoai} using the Virtual Large Language Model (vLLM) runtime. If the model deployment is in a different {ocp-short-name} environment than the {ols-long} deployment, the model deployment must include a route to expose it outside the cluster. For more information, see link:https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2-latest/html/serving_models/serving-large-models_serving-large-models#about-the-single-model-serving-platform_serving-large-models[About the single-model serving platform].
+You need an LLM deployed on the single model-serving platform of {rhoai} using the Virtual Large Language Model (vLLM) runtime. If the model deployment is in a different {ocp-short-name} environment than the {ols-long} deployment, the model deployment must include a route to expose it outside the cluster. For more information, see link:https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2-latest/html/serving_models/serving-large-models_serving-large-models#about-the-single-model-serving-platform_serving-large-models[About the single-model serving platform].
+
+{ols-long} version 1.0 supports vLLM Server version 0.8.4. When self-hosting an LLM with {rhoai}, you can use vLLM Server as the inference engine for your model deployment.
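The added paragraphs describe pointing {ols-long} at a vLLM-served model. As a rough, hedged sketch only: a provider entry in the OLSConfig custom resource might look approximately like the fragment below. The provider name, URL, secret name, and model name are placeholders, and the `type` value and exact field names are assumptions — verify them against the {ols-official} configuration reference before use.

```yaml
# Hypothetical sketch of an OLSConfig provider entry for a model served
# with the vLLM runtime on the RHOAI single model-serving platform.
# All names, the URL, the secret, and the provider type are placeholders/assumptions.
apiVersion: ols.openshift.io/v1alpha1
kind: OLSConfig
metadata:
  name: cluster
spec:
  llm:
    providers:
      - name: my-rhoai-provider                    # placeholder provider name
        type: rhoai_vllm                           # assumed provider type for RHOAI vLLM
        url: https://model-route.example.com/v1    # route exposing the model outside the cluster
        credentialsSecretRef:
          name: rhoai-api-keys                     # secret holding the model API token
        models:
          - name: my-model                         # model name as deployed on the platform
```

Because both providers are OpenAI API-compatible, the corresponding {rhelai} entry would differ mainly in the provider `type` and the endpoint URL of the secure model endpoint.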
