
Commit 38b1c56

OLS-1790: minor edit
1 parent 68b69db


1 file changed: +2 -2 lines changed


modules/ols-large-language-model-requirements.adoc

Lines changed: 2 additions & 2 deletions
@@ -45,7 +45,7 @@ You can configure {rhelai} as the LLM provider.
 
 Because the {rhel} is in a different environment than the {ols-long} deployment, the model deployment must allow access using a secure connection. For more information, see link:https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.2/html-single/building_your_rhel_ai_environment/index#creating_secure_endpoint[Optional: Allowing access to a model from a secure endpoint].
 
-{ols-long} version 1.0 supports vLLM Server version 0.8.4. When self-hosting an LLM with {rhelai}, you can use vLLM Server as the inference engine for your model deployment.
+{ols-long} version 1.0 and later supports vLLM Server version 0.8.4. When self-hosting an LLM with {rhelai}, you can use vLLM Server as the inference engine for your model deployment.
 
 [id="rhoai_{context}"]
 == {rhoai}
@@ -54,4 +54,4 @@ Because the {rhel} is in a different environment than the {ols-long} deployment,
 
 You need an LLM deployed on the single model-serving platform of {rhoai} using the Virtual Large Language Model (vLLM) runtime. If the model deployment is in a different {ocp-short-name} environment than the {ols-long} deployment, the model deployment must include a route to expose it outside the cluster. For more information, see link:https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2-latest/html/serving_models/serving-large-models_serving-large-models#about-the-single-model-serving-platform_serving-large-models[About the single-model serving platform].
 
-{ols-long} version 1.0 supports vLLM Server version 0.8.4. When self-hosting an LLM with {rhoai}, you can use vLLM Server as the inference engine for your model deployment.
+{ols-long} version 1.0 and later supports vLLM Server version 0.8.4. When self-hosting an LLM with {rhoai}, you can use vLLM Server as the inference engine for your model deployment.
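Both hunks touch the same requirement: the vLLM Server model deployment must be reachable from the {ols-long} deployment over a secure connection, or through a route when it runs in a different cluster. As a minimal sketch of what that means in practice, the following Python snippet probes vLLM Server's OpenAI-compatible /v1/models listing over TLS; the endpoint URL and token are hypothetical placeholders, not values taken from the documentation.

# Rough reachability check for a vLLM Server endpoint exposed over TLS.
# ENDPOINT and TOKEN are placeholders; substitute the secure route of your
# model deployment and its API token (if the server requires one).
import json
import urllib.request

ENDPOINT = "https://vllm.example.com"  # hypothetical secure route to the model deployment
TOKEN = "changeme"                     # hypothetical bearer token

# vLLM Server exposes an OpenAI-compatible API; GET /v1/models lists the served models.
request = urllib.request.Request(
    f"{ENDPOINT}/v1/models",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
with urllib.request.urlopen(request, timeout=10) as response:
    payload = json.load(response)

for model in payload.get("data", []):
    print(model.get("id"))

If this check succeeds over HTTPS from the network where {ols-long} runs, the secure-endpoint and route requirements described in the changed module are at least superficially satisfied.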
