
Commit d73955a

chore: remove absolute paths (#3263)
# What does this PR do?

Found these issues while moving to GitHub Pages.

## Test Plan

`uv run --group docs sphinx-autobuild docs/source docs/build/html --write-all`
1 parent: cec00c5

File tree

10 files changed (+17, -17 lines)


docs/source/advanced_apis/evaluation_concepts.md

Lines changed: 1 addition & 1 deletion
@@ -33,7 +33,7 @@ The list of open-benchmarks we currently support:
 - [MMMU](https://arxiv.org/abs/2311.16502) (A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI)]: Benchmark designed to evaluate multimodal models.


-You can follow this [contributing guide](https://llama-stack.readthedocs.io/en/latest/references/evals_reference/index.html#open-benchmark-contributing-guide) to add more open-benchmarks to Llama Stack
+You can follow this [contributing guide](../references/evals_reference/index.md#open-benchmark-contributing-guide) to add more open-benchmarks to Llama Stack

 #### Run evaluation on open-benchmarks via CLI


docs/source/building_applications/playground/index.md

Lines changed: 1 addition & 1 deletion
@@ -88,7 +88,7 @@ Interactive pages for users to play with and explore Llama Stack API capabilitie
 - **API Resources**: Inspect Llama Stack API resources
 - This page allows you to inspect Llama Stack API resources (`models`, `datasets`, `memory_banks`, `benchmarks`, `shields`).
 - Under the hood, it uses Llama Stack's `/<resources>/list` API to get information about each resources.
-- Please visit [Core Concepts](https://llama-stack.readthedocs.io/en/latest/concepts/index.html) for more details about the resources.
+- Please visit [Core Concepts](../../concepts/index.md) for more details about the resources.

 ### Starting the Llama Stack Playground


docs/source/building_applications/responses_vs_agents.md

Lines changed: 4 additions & 4 deletions
@@ -3,7 +3,7 @@
 Llama Stack (LLS) provides two different APIs for building AI applications with tool calling capabilities: the **Agents API** and the **OpenAI Responses API**. While both enable AI systems to use tools, and maintain full conversation history, they serve different use cases and have distinct characteristics.

 ```{note}
-For simple and basic inferencing, you may want to use the [Chat Completions API](https://llama-stack.readthedocs.io/en/latest/providers/index.html#chat-completions) directly, before progressing to Agents or Responses API.
+**Note:** For simple and basic inferencing, you may want to use the [Chat Completions API](../providers/openai.md#chat-completions) directly, before progressing to Agents or Responses API.
 ```

 ## Overview
@@ -173,7 +173,7 @@ Both APIs demonstrate distinct strengths that make them valuable on their own fo

 ## For More Information

-- **LLS Agents API**: For detailed information on creating and managing agents, see the [Agents documentation](https://llama-stack.readthedocs.io/en/latest/building_applications/agent.html)
+- **LLS Agents API**: For detailed information on creating and managing agents, see the [Agents documentation](agent.md)
 - **OpenAI Responses API**: For information on using the OpenAI-compatible responses API, see the [OpenAI API documentation](https://platform.openai.com/docs/api-reference/responses)
-- **Chat Completions API**: For the default backend API used by Agents, see the [Chat Completions providers documentation](https://llama-stack.readthedocs.io/en/latest/providers/index.html#chat-completions)
-- **Agent Execution Loop**: For understanding how agents process turns and steps in their execution, see the [Agent Execution Loop documentation](https://llama-stack.readthedocs.io/en/latest/building_applications/agent_execution_loop.html)
+- **Chat Completions API**: For the default backend API used by Agents, see the [Chat Completions providers documentation](../providers/openai.md#chat-completions)
+- **Agent Execution Loop**: For understanding how agents process turns and steps in their execution, see the [Agent Execution Loop documentation](agent_execution_loop.md)

docs/source/concepts/distributions.md

Lines changed: 1 addition & 1 deletion
@@ -6,4 +6,4 @@ While there is a lot of flexibility to mix-and-match providers, often users will

 **Locally Hosted Distro**: You may want to run Llama Stack on your own hardware. Typically though, you still need to use Inference via an external service. You can use providers like HuggingFace TGI, Fireworks, Together, etc. for this purpose. Or you may have access to GPUs and can run a [vLLM](https://github.com/vllm-project/vllm) or [NVIDIA NIM](https://build.nvidia.com/nim?filters=nimType%3Anim_type_run_anywhere&q=llama) instance. If you "just" have a regular desktop machine, you can use [Ollama](https://ollama.com/) for inference. To provide convenient quick access to these options, we provide a number of such pre-configured locally-hosted Distros.

-**On-device Distro**: To run Llama Stack directly on an edge device (mobile phone or a tablet), we provide Distros for [iOS](https://llama-stack.readthedocs.io/en/latest/distributions/ondevice_distro/ios_sdk.html) and [Android](https://llama-stack.readthedocs.io/en/latest/distributions/ondevice_distro/android_sdk.html)
+**On-device Distro**: To run Llama Stack directly on an edge device (mobile phone or a tablet), we provide Distros for [iOS](../distributions/ondevice_distro/ios_sdk.md) and [Android](../distributions/ondevice_distro/android_sdk.md)

docs/source/distributions/importing_as_library.md

Lines changed: 1 addition & 1 deletion
@@ -27,7 +27,7 @@ Then, you can access the APIs like `models` and `inference` on the client and ca
 response = client.models.list()
 ```

-If you've created a [custom distribution](https://llama-stack.readthedocs.io/en/latest/distributions/building_distro.html), you can also use the run.yaml configuration file directly:
+If you've created a [custom distribution](building_distro.md), you can also use the run.yaml configuration file directly:

 ```python
 client = LlamaStackAsLibraryClient(config_path)
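
For context, the hunk above edits a how-to for running Llama Stack in-process. A minimal sketch of that library-client flow, assuming the import path and `initialize()` step match current llama-stack releases (neither is shown in this diff), might look like:

```python
# Minimal sketch of using Llama Stack as an in-process library.
# The import path is an assumption; verify it against your installed llama-stack version.
from llama_stack.distribution.library_client import LlamaStackAsLibraryClient

# Pass a distribution template name, or the path to a custom run.yaml.
client = LlamaStackAsLibraryClient("./run.yaml")  # illustrative path
client.initialize()  # instantiates the providers declared in the config

response = client.models.list()
for model in response:
    print(model.identifier)
```

Because the client runs in-process, no llama-stack server is started; the custom distribution's run.yaml is consumed directly.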

docs/source/distributions/k8s/apply.sh

Lines changed: 3 additions & 3 deletions
@@ -22,17 +22,17 @@ else
 fi

 if [ -z "${GITHUB_CLIENT_ID:-}" ]; then
-  echo "ERROR: GITHUB_CLIENT_ID not set. You need it for Github login to work. Refer to https://llama-stack.readthedocs.io/en/latest/deploying/index.html#kubernetes-deployment-guide"
+  echo "ERROR: GITHUB_CLIENT_ID not set. You need it for Github login to work. See the Kubernetes Deployment Guide in the Llama Stack documentation."
   exit 1
 fi

 if [ -z "${GITHUB_CLIENT_SECRET:-}" ]; then
-  echo "ERROR: GITHUB_CLIENT_SECRET not set. You need it for Github login to work. Refer to https://llama-stack.readthedocs.io/en/latest/deploying/index.html#kubernetes-deployment-guide"
+  echo "ERROR: GITHUB_CLIENT_SECRET not set. You need it for Github login to work. See the Kubernetes Deployment Guide in the Llama Stack documentation."
   exit 1
 fi

 if [ -z "${LLAMA_STACK_UI_URL:-}" ]; then
-  echo "ERROR: LLAMA_STACK_UI_URL not set. Should be set to the external URL of the UI (excluding port). You need it for Github login to work. Refer to https://llama-stack.readthedocs.io/en/latest/deploying/index.html#kubernetes-deployment-guide"
+  echo "ERROR: LLAMA_STACK_UI_URL not set. Should be set to the external URL of the UI (excluding port). You need it for Github login to work. See the Kubernetes Deployment Guide in the Llama Stack documentation."
   exit 1
 fi

docs/source/distributions/ondevice_distro/android_sdk.md

Lines changed: 1 addition & 1 deletion
@@ -66,7 +66,7 @@ llama stack run starter --port 5050

 Ensure the Llama Stack server version is the same as the Kotlin SDK Library for maximum compatibility.

-Other inference providers: [Table](https://llama-stack.readthedocs.io/en/latest/index.html#supported-llama-stack-implementations)
+Other inference providers: [Table](../../index.md#supported-llama-stack-implementations)

 How to set remote localhost in Demo App: [Settings](https://github.com/meta-llama/llama-stack-client-kotlin/tree/latest-release/examples/android_app#settings)


docs/source/distributions/self_hosted_distro/meta-reference-gpu.md

Lines changed: 2 additions & 2 deletions
@@ -2,7 +2,7 @@
 orphan: true
 ---
 <!-- This file was auto-generated by distro_codegen.py, please edit source -->
-# Meta Reference Distribution
+# Meta Reference GPU Distribution

 ```{toctree}
 :maxdepth: 2
@@ -41,7 +41,7 @@ The following environment variables can be configured:

 ## Prerequisite: Downloading Models

-Please use `llama model list --downloaded` to check that you have llama model checkpoints downloaded in `~/.llama` before proceeding. See [installation guide](https://llama-stack.readthedocs.io/en/latest/references/llama_cli_reference/download_models.html) here to download the models. Run `llama model list` to see the available models to download, and `llama model download` to download the checkpoints.
+Please use `llama model list --downloaded` to check that you have llama model checkpoints downloaded in `~/.llama` before proceeding. See [installation guide](../../references/llama_cli_reference/download_models.md) here to download the models. Run `llama model list` to see the available models to download, and `llama model download` to download the checkpoints.

 ```
 $ llama model list --downloaded

docs/source/references/evals_reference/index.md

Lines changed: 1 addition & 1 deletion
@@ -202,7 +202,7 @@ pprint(response)

 Llama Stack offers a library of scoring functions and the `/scoring` API, allowing you to run evaluations on your pre-annotated AI application datasets.

-In this example, we will work with an example RAG dataset you have built previously, label with an annotation, and use LLM-As-Judge with custom judge prompt for scoring. Please checkout our [Llama Stack Playground](https://llama-stack.readthedocs.io/en/latest/playground/index.html) for an interactive interface to upload datasets and run scorings.
+In this example, we will work with an example RAG dataset you have built previously, label with an annotation, and use LLM-As-Judge with custom judge prompt for scoring. Please checkout our [Llama Stack Playground](../../building_applications/playground/index.md) for an interactive interface to upload datasets and run scorings.

 ```python
 judge_model_id = "meta-llama/Llama-3.1-405B-Instruct-FP8"
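
For context, `judge_model_id` opens that doc's LLM-As-Judge scoring example. A hedged sketch of what such a `/scoring` call can look like follows; the scoring-function id, judge prompt, row fields, and score regex are illustrative assumptions, not content from this diff:

```python
# Hypothetical LLM-as-judge scoring call; id, prompt, fields, and regex are assumed.
JUDGE_PROMPT = (
    "Given QUESTION, GENERATED_RESPONSE and EXPECTED_RESPONSE, "
    "grade the response A, B, or C and reply with 'Answer: <grade>'."
)

rows = [
    {
        "input_query": "What is the capital of France?",
        "generated_answer": "Paris.",
        "expected_answer": "Paris",
    }
]

response = client.scoring.score(
    input_rows=rows,
    scoring_functions={
        "llm-as-judge::base": {  # assumed scoring-function id
            "type": "llm_as_judge",
            "judge_model": judge_model_id,
            "prompt_template": JUDGE_PROMPT,
            "judge_score_regexes": [r"Answer: (A|B|C)"],
        }
    },
)
pprint(response)
```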

llama_stack/distributions/meta-reference-gpu/doc_template.md

Lines changed: 2 additions & 2 deletions
@@ -1,7 +1,7 @@
 ---
 orphan: true
 ---
-# Meta Reference Distribution
+# Meta Reference GPU Distribution

 ```{toctree}
 :maxdepth: 2
@@ -29,7 +29,7 @@ The following environment variables can be configured:

 ## Prerequisite: Downloading Models

-Please use `llama model list --downloaded` to check that you have llama model checkpoints downloaded in `~/.llama` before proceeding. See [installation guide](https://llama-stack.readthedocs.io/en/latest/references/llama_cli_reference/download_models.html) here to download the models. Run `llama model list` to see the available models to download, and `llama model download` to download the checkpoints.
+Please use `llama model list --downloaded` to check that you have llama model checkpoints downloaded in `~/.llama` before proceeding. See [installation guide](../../references/llama_cli_reference/download_models.md) here to download the models. Run `llama model list` to see the available models to download, and `llama model download` to download the checkpoints.

 ```
 $ llama model list --downloaded
