
Commit d03742f

docs: Contributor guidelines for creating Internal or External providers
1 parent 19123ca commit d03742f


docs/source/contributing/new_api_provider.md

Lines changed: 7 additions & 0 deletions
@@ -14,6 +14,13 @@ Here are some example PRs to help you get started:
- [Nvidia Inference Implementation](https://github.com/meta-llama/llama-stack/pull/355)
- [Model context protocol Tool Runtime](https://github.com/meta-llama/llama-stack/pull/665)

## Guidelines for creating Internal or External Providers

| **Type** | Internal (In-tree) | External (out-of-tree) |
|----------|--------------------|------------------------|
| **Description** | A provider that is implemented directly in the Llama Stack codebase. | A provider that is outside of the Llama Stack core codebase but is still accessible and usable by Llama Stack. |
| **Benefits** | Ability to interact with the provider with minimal additional configuration or installation. | Contributors do not have to add code directly to the core codebase to create providers accessible on Llama Stack, and provider-specific code stays separate from the core Llama Stack code. |
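
To make the distinction concrete, here is a minimal sketch of what an out-of-tree provider package might look like. This is an illustration only: the package, class, and method names below are hypothetical assumptions, not the actual Llama Stack provider interface.

```python
# my_custom_provider/inference.py -- a hypothetical out-of-tree package.
# Everything here lives outside the llama-stack repository; the names
# are illustrative assumptions, not the real provider interface.

from typing import Any


class MyCustomInferenceProvider:
    """Sketch of an external inference provider.

    Because this code ships as its own package, it can be installed
    separately (e.g. `pip install my-custom-provider`) and wired into a
    Llama Stack distribution without touching the core codebase.
    """

    def __init__(self, config: dict[str, Any]) -> None:
        # Provider-specific configuration (endpoints, API keys, etc.)
        # stays entirely within this package.
        self.config = config

    async def initialize(self) -> None:
        # Set up clients or connections to the backing service here.
        pass

    async def completion(self, prompt: str, **params: Any) -> str:
        # Forward the request to the provider's backend. The real method
        # signatures are defined by the Llama Stack API being implemented.
        raise NotImplementedError
```

An internal (in-tree) provider would implement the same API surface but live inside the Llama Stack repository itself, which is why it works with minimal additional configuration or installation.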
## Inference Provider Patterns

For Inference providers that target OpenAI-compatible APIs, Llama Stack provides several mixin classes to simplify development and ensure consistent behavior across providers.
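
As a rough sketch of the pattern (using a stand-in mixin name; consult the Llama Stack source for the actual mixin classes and their required hooks), a provider built on such a mixin typically only supplies the backend-specific details:

```python
# Minimal sketch of the mixin pattern for OpenAI-compatible backends.
# `OpenAICompatMixin` is a hypothetical stand-in, not a real Llama Stack class.


class OpenAICompatMixin:
    """Hypothetical mixin: would implement the shared request/response
    handling for any OpenAI-compatible HTTP API."""

    def get_base_url(self) -> str:  # hook supplied by the concrete provider
        raise NotImplementedError

    def get_api_key(self) -> str:  # hook supplied by the concrete provider
        raise NotImplementedError


class ExampleInferenceAdapter(OpenAICompatMixin):
    """A concrete provider only fills in backend-specific details;
    the mixin handles the OpenAI-compatible plumbing."""

    def get_base_url(self) -> str:
        return "https://api.example.com/v1"

    def get_api_key(self) -> str:
        return "sk-..."  # typically read from provider configuration
```

The benefit of this design is that each new OpenAI-compatible provider reuses the shared translation logic rather than reimplementing it, which keeps behavior consistent across providers.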
