Commit a5a0518

docs: show complete prompts.yml content in getting started tutorial (#1311)
1 parent: b574cec

File tree

1 file changed (+6 −6 lines)


docs/getting-started.md

Lines changed: 6 additions & 6 deletions
@@ -36,7 +36,8 @@ The sample code uses the [Llama 3.3 70B Instruct model](https://build.nvidia.com
 $ export NVIDIA_API_KEY=<nvapi-...>
 ```
 
-1. Create a _configuration store_ directory, such as `config` and add a `config/config.yml` file with the following contents:
+1. Create a _configuration store_ directory, such as `config`.
+2. Copy the following configuration code and save as `config.yml` in the `config` directory.
 
 ```{literalinclude} ../examples/configs/gs_content_safety/config/config.yml
 :language: yaml
@@ -45,22 +46,21 @@ The sample code uses the [Llama 3.3 70B Instruct model](https://build.nvidia.com
 The `models` key in the `config.yml` file configures the LLM model.
 For more information about the key, refer to [](./user-guides/configuration-guide.md#the-llm-model).
 
-1. Create a prompts file, such as `config/prompts.yml`, ([download](path:../examples/configs/gs_content_safety/config/prompts.yml)), with contents like the following partial example:
+3. Copy the following prompts code and save as `prompts.yml` in the `config` directory.
 
 ```{literalinclude} ../examples/configs/gs_content_safety/config/prompts.yml
 :language: yaml
-:lines: 1-15
 ```
 
-1. Load the guardrails configuration:
+4. Load the guardrails configuration:
 
 ```{literalinclude} ../examples/configs/gs_content_safety/demo.py
 :language: python
 :start-after: "# start-load-config"
 :end-before: "# end-load-config"
 ```
 
-1. Generate a response:
+5. Generate a response:
 
 ```{literalinclude} ../examples/configs/gs_content_safety/demo.py
 :language: python
@@ -76,7 +76,7 @@ The sample code uses the [Llama 3.3 70B Instruct model](https://build.nvidia.com
 :end-before: "# end-generate-response"
 ```
 
-1. Send a safe request and generate a response:
+6. Send a safe request and generate a response:
 
 ```{literalinclude} ../examples/configs/gs_content_safety/demo.py
 :language: python
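The diff references the tutorial's `config.yml` only through a `{literalinclude}` directive, so the file's contents do not appear here. As a rough illustration of the `models` key that the tutorial text mentions, a minimal sketch follows — the `engine` and `model` values are assumptions for an NVIDIA-hosted Llama 3.3 70B endpoint, not the repository's actual file:

```yaml
# Hypothetical sketch of config/config.yml -- the real file lives at
# examples/configs/gs_content_safety/config/config.yml in the repository.
models:
  - type: main                           # the application ("main") LLM
    engine: nim                          # assumed engine name for NVIDIA-hosted models
    model: meta/llama-3.3-70b-instruct   # assumed model identifier
```

The `models` key is the piece the tutorial's step 2 asks readers to save into the `config` directory; the guardrails runtime reads it when the configuration is loaded.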

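The commit's stated purpose is to show the complete `prompts.yml` instead of only its first 15 lines (the removed `:lines: 1-15` option). For orientation, a hypothetical sketch of the general shape of such a prompts file — the task name and template wording below are illustrative assumptions, not the repository's actual prompt:

```yaml
# Hypothetical sketch of config/prompts.yml -- task name and wording are
# illustrative; see examples/configs/gs_content_safety/config/prompts.yml
# for the real content.
prompts:
  - task: content_safety_check_input
    content: |
      Task: Check whether the user message below contains unsafe content.
      User message: "{{ user_input }}"
      Answer "safe" or "unsafe".
```

Inlining the full file rather than a partial excerpt is what lets the tutorial drop the separate download link that the old step provided.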
0 commit comments
