Commit 763ab23
OLS-1790: minor style edits
1 parent 7809d5c

1 file changed: modules/ols-large-language-model-requirements.adoc (4 additions, 5 deletions)
@@ -41,18 +41,17 @@ To use {azure-official} with {ols-official}, you need access to link:https://azu
 
 {rhelai} is OpenAI API-compatible, and is configured in a similar manner as the OpenAI provider.
 
-You can configure {rhelai} as the (Large Language Model) LLM provider.
+You can configure {rhelai} as the LLM provider.
 
 Because the {rhel} is in a different environment than the {ols-long} deployment, the model deployment must allow access using a secure connection. For more information, see link:https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.2/html-single/building_your_rhel_ai_environment/index#creating_secure_endpoint[Optional: Allowing access to a model from a secure endpoint].
 
-{ols-long} version 1.0 supports vLLM Server version 0.8.4. When self-hosting a large language model with {rhelai}, you can use vLLM Server as the inference engine for your model deployment.
-
+{ols-long} version 1.0 supports vLLM Server version 0.8.4. When self-hosting an LLM with {rhelai}, you can use vLLM Server as the inference engine for your model deployment.
 
 [id="rhoai_{context}"]
 == {rhoai}
 
 {rhoai} is OpenAI API-compatible, and is configured largely the same as the OpenAI provider.
 
-You need a Large Language Model (LLM) deployed on the single model-serving platform of {rhoai} using the Virtual Large Language Model (vLLM) runtime. If the model deployment is in a different {ocp-short-name} environment than the {ols-long} deployment, the model deployment must include a route to expose it outside the cluster. For more information, see link:https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2-latest/html/serving_models/serving-large-models_serving-large-models#about-the-single-model-serving-platform_serving-large-models[About the single-model serving platform].
+You need an LLM deployed on the single model-serving platform of {rhoai} using the Virtual Large Language Model (vLLM) runtime. If the model deployment is in a different {ocp-short-name} environment than the {ols-long} deployment, the model deployment must include a route to expose it outside the cluster. For more information, see link:https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2-latest/html/serving_models/serving-large-models_serving-large-models#about-the-single-model-serving-platform_serving-large-models[About the single-model serving platform].
 
-{ols-long} version 1.0 supports vLLM Server version 0.8.4. When self-hosting a large language model with {rhoai}, you can use vLLM Server as the inference engine for your model deployment.
+{ols-long} version 1.0 supports vLLM Server version 0.8.4. When self-hosting an LLM with {rhoai}, you can use vLLM Server as the inference engine for your model deployment.
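Context for the text being edited above: both {rhelai} and {rhoai} serve models through vLLM, which exposes an OpenAI-compatible API, so a client addresses a self-hosted deployment the same way it would address OpenAI. A minimal sketch of building such a request (the base URL and model name are hypothetical placeholders, not values from this repository):

```python
import json

# Hypothetical route for a self-hosted vLLM deployment; substitute the
# secure endpoint exposed by your own RHEL AI or RHOAI environment.
BASE_URL = "https://model.example.com/v1"  # placeholder, not from the docs


def chat_completion_request(model: str, prompt: str) -> tuple[str, str]:
    """Build the (url, json_body) pair for an OpenAI-style chat completion.

    vLLM's OpenAI-compatible server accepts the same request shape as
    OpenAI's /v1/chat/completions endpoint, which is why OpenAI-style
    provider configuration carries over to these self-hosted providers.
    """
    url = f"{BASE_URL}/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, body


# "granite-7b" is an illustrative served-model name, not mandated here.
url, body = chat_completion_request("granite-7b", "What is OpenShift?")
print(url)
```

Only the endpoint URL and credentials differ between providers; the payload itself is unchanged, which is what the doc means by "configured in a similar manner as the OpenAI provider."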
