This repository was archived by the owner on Jun 3, 2025. It is now read-only.

Commit 0659a07

Fix deepsparse readme (#303)
* fix yaml identation
1 parent bb475d9 · commit 0659a07

File tree: 2 files changed (+16, −14 lines)


README.md (9 additions, 7 deletions)
````diff
@@ -97,15 +97,17 @@ To look up arguments run: `deepsparse.server --help`.
 **⭐ Multiple Models ⭐**
 To serve multiple models in your deployment you can easily build a `config.yaml`. In the example below, we define two BERT models in our configuration for the question answering task:
 
-models:
+```yaml
+models:
     - task: question_answering
-    model_path: zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/base-none
-    batch_size: 1
-    alias: question_answering/dense
+      model_path: zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/base-none
+      batch_size: 1
+      alias: question_answering/base
     - task: question_answering
-    model_path: zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/pruned_quant-aggressive_95
-    batch_size: 1
-    alias: question_answering/sparse_quantized
+      model_path: zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/pruned_quant-aggressive_95
+      batch_size: 1
+      alias: question_answering/pruned_quant
+```
 
 Finally, after your `config.yaml` file is built, run the server with the config file path as an argument:
 ```bash
````

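Assembled from the added lines of the hunk above, the multi-model config this commit introduces would read roughly as follows. Note the indentation is reconstructed, since the diff view strips leading whitespace:

```yaml
# Two BERT question-answering models served side by side;
# each entry names a task, a SparseZoo model stub, a batch size, and an alias.
models:
    - task: question_answering
      model_path: zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/base-none
      batch_size: 1
      alias: question_answering/base
    - task: question_answering
      model_path: zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/pruned_quant-aggressive_95
      batch_size: 1
      alias: question_answering/pruned_quant
```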
src/deepsparse/server/README.md (7 additions, 7 deletions)
````diff
@@ -89,16 +89,16 @@ __ __
 To serve multiple models you can build a `config.yaml` file.
 In the sample YAML file below, we are defining two BERT models to be served by the `deepsparse.server` for the **question answering** task:
 
-```
+```yaml
 models:
     - task: question_answering
-    model_path: zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/base-none
-    batch_size: 1
-    alias: question_answering/base
+      model_path: zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/base-none
+      batch_size: 1
+      alias: question_answering/base
     - task: question_answering
-    model_path: zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/pruned_quant-aggressive_95
-    batch_size: 1
-    alias: question_answering/pruned_quant
+      model_path: zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/pruned_quant-aggressive_95
+      batch_size: 1
+      alias: question_answering/pruned_quant
 ```
 You can now run the server with the config file path passed in the `--config_file` argument:
 
````
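As a quick sanity check of the corrected layout (a sketch, assuming PyYAML is installed; the indentation follows the conventional nesting the commit appears to restore), the config can be parsed and inspected:

```python
# Parse the multi-model config from this commit and confirm its structure.
import yaml  # PyYAML — assumed available

CONFIG = """\
models:
    - task: question_answering
      model_path: zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/base-none
      batch_size: 1
      alias: question_answering/base
    - task: question_answering
      model_path: zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/pruned_quant-aggressive_95
      batch_size: 1
      alias: question_answering/pruned_quant
"""

config = yaml.safe_load(CONFIG)
models = config["models"]

# With correct indentation, each list item carries its own keys.
print(len(models))        # → 2
print(sorted(models[0]))  # → ['alias', 'batch_size', 'model_path', 'task']
```

If the keys were dedented to the level of the `-` markers, each would land in a separate (or malformed) mapping instead of the intended per-model entry, which is the failure mode the commit message points at.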
0 commit comments
