Llama Stack defines and standardizes the core building blocks that simplify AI application development. It provides a unified set of APIs with implementations from leading service providers. More specifically, it provides:

- **Unified API layer** for Inference, RAG, Agents, Tools, Safety, Evals.
- **Plugin architecture** to support the rich ecosystem of different API implementations in various environments, including local development, on-premises, cloud, and mobile.
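
To make the unified API layer concrete, here is a minimal sketch using the [Python client SDK](https://github.com/meta-llama/llama-stack-client-python) against a locally running server. The port, model ID, and prompt are illustrative assumptions; substitute the values your own distribution actually exposes.

```python
# Minimal sketch: calling the unified Inference API through the Python
# client SDK (`pip install llama-stack-client`). Assumes a Llama Stack
# server is already running locally; the port and model ID below are
# illustrative assumptions, not fixed values.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-3B-Instruct",
    messages=[{"role": "user", "content": "Write a haiku about coding."}],
)
print(response.completion_message.content)
```

Because the API layer is unified, the same call shape works regardless of which inference provider the server is configured with.
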
#### Llama Stack Benefits

- **Flexibility**: Developers can choose their preferred infrastructure without changing APIs and enjoy flexible deployment choices.
- **Consistent Experience**: With its unified APIs, Llama Stack makes it easier to build, test, and deploy AI applications with consistent application behavior.
- **Robust Ecosystem**: Llama Stack is integrated with distribution partners (cloud providers, hardware vendors, and AI-focused companies) that offer tailored infrastructure, software, and services for deploying Llama models.

By reducing friction and complexity, Llama Stack empowers developers to focus on what they do best: building transformative generative AI applications.

For more information, see the [Benefits of Llama Stack](https://llamastack.github.io/docs/v0.3.2/concepts/architecture#benefits-of-llama-stack) documentation.

### API Providers

Here is a list of the various API providers and available distributions that can help developers get started easily with Llama Stack.

Please check out the [full list](https://llamastack.github.io/docs/providers) of providers.

| API Provider | Environments | Agents | Inference | VectorIO | Safety | Post Training | Eval | DatasetIO |
|:------------:|:------------:|:------:|:---------:|:--------:|:------:|:-------------:|:----:|:---------:|

### Distributions

A Llama Stack Distribution (or "distro") is a pre-configured bundle of provider implementations for each API component. Distributions make it easy to get started with a specific deployment scenario. For example, you can begin with a local setup of Ollama and seamlessly transition to production with Fireworks, without changing your application code.
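
As a sketch of that workflow (the endpoint and model names are hypothetical placeholders, not prescribed values), the application code can stay identical while only configuration changes:

```python
# Hypothetical sketch: the same application code runs against a local
# Ollama-backed starter distro and a Fireworks-backed production server;
# only the endpoint and model configuration change. The URLs and model
# IDs below are illustrative assumptions.
import os

from llama_stack_client import LlamaStackClient

# e.g. http://localhost:8321 in development, your hosted endpoint in production
base_url = os.environ.get("LLAMA_STACK_URL", "http://localhost:8321")
model_id = os.environ.get("INFERENCE_MODEL", "meta-llama/Llama-3.2-3B-Instruct")

client = LlamaStackClient(base_url=base_url)
response = client.inference.chat_completion(
    model_id=model_id,
    messages=[{"role": "user", "content": "Summarize Llama Stack in one sentence."}],
)
print(response.completion_message.content)
```
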
Here are some of the distributions we support:

| **Distribution** | **Llama Stack Docker** | Start This Distribution |
|:----------------:|:----------------------:|:-----------------------:|
| Starter Distribution | [llamastack/distribution-starter](https://hub.docker.com/repository/docker/llamastack/distribution-starter/general) | [Guide](https://llamastack.github.io/latest/distributions/self_hosted_distro/starter.html) |
| Meta Reference | [llamastack/distribution-meta-reference-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-gpu/general) | [Guide](https://llamastack.github.io/latest/distributions/self_hosted_distro/meta-reference-gpu.html) |

### Client SDKs

Check out our client SDKs for connecting to a Llama Stack server in your preferred language. You can choose from [python](https://github.com/meta-llama/llama-stack-client-python), [typescript](https://github.com/meta-llama/llama-stack-client-typescript), [swift](https://github.com/meta-llama/llama-stack-client-swift), and [kotlin](https://github.com/meta-llama/llama-stack-client-kotlin) to quickly build your applications.
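
For instance, a quick way to sanity-check a connection with the Python SDK is to list the models the server serves. The base URL here is an assumption for a default local setup.

```python
# Quick connectivity check with the Python client SDK: list the models
# the server knows about. The base URL is an assumption for a local
# server on the default port.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

for model in client.models.list():
    print(model.identifier)
```
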
You can find more example scripts that use the client SDKs to talk to a Llama Stack server in our [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main/examples) repo.