adding the fiddlecube provider #1


Status: Open · wants to merge 34 commits into base: main

Commits (34):
- b078440 adding the fiddlecube provider (kaushik-himself, Feb 7, 2025)
- fbcb1b6 chore: calling api in mothership (vinikatyal, Feb 7, 2025)
- fe03e62 chore: hard coded URL for now (vinikatyal, Feb 7, 2025)
- bf9a733 adding prod url (kaushik-himself, Feb 10, 2025)
- b15cf63 add safety violation code (kaushik-himself, Feb 10, 2025)
- 9bec7b6 doc: getting started notebook (#996) (ehhuang, Feb 7, 2025)
- 61f14ed test: fix flaky agent test (#1002) (ehhuang, Feb 7, 2025)
- 70197c2 test: rm unused exception alias in pytest.raises (#991) (leseb, Feb 7, 2025)
- 47f51f2 fix: List providers command prints out non-existing APIs from registr… (terrytangyuan, Feb 7, 2025)
- 7d88da4 Delete CHANGELOG.md (ashwinb, Feb 7, 2025)
- 57a6c27 chore: add missing ToolConfig import in groq.py (#983) (leseb, Feb 7, 2025)
- 4387fb3 test: remove flaky agent test (#1006) (ehhuang, Feb 7, 2025)
- a3d0d3c test: Split inference tests to text and vision (#1008) (terrytangyuan, Feb 7, 2025)
- ca80e7a feat: Add HTTPS serving option (#1000) (ashwinb, Feb 7, 2025)
- d4cb624 test: encode image data as base64 (#1003) (leseb, Feb 7, 2025)
- 42e47d5 fix: Ensure a better error stack trace when llama-stack is not built … (cdoern, Feb 7, 2025)
- 5de7cff refactor(ollama): model availability check (#986) (leseb, Feb 7, 2025)
- 08aa603 Nuke use_proxy from code execution (ashwinb, Feb 7, 2025)
- 17b9e16 Minor clean up of notebook (ashwinb, Feb 7, 2025)
- 6d6a09e No spaces in ipynb tests (ashwinb, Feb 7, 2025)
- 4922f8c raise when client initialize fails (Feb 7, 2025)
- aff96cd Bump version to 0.1.2 (github-actions[bot], Feb 7, 2025)
- bf2e5b1 Getting started notebook update (#936) (jeffxtang, Feb 7, 2025)
- 3965464 docs: update index.md for 0.1.2 (#1013) (raghotham, Feb 7, 2025)
- 6eb0c40 test: Make text-based chat completion tests run 10x faster (#1016) (terrytangyuan, Feb 8, 2025)
- a869902 chore: Updated requirements.txt (#1017) (cheesecake100201, Feb 8, 2025)
- d066798 test: Use JSON tool prompt format for remote::vllm provider (#1019) (terrytangyuan, Feb 9, 2025)
- 626c9ff docs: Render check marks correctly on PyPI (#1024) (terrytangyuan, Feb 10, 2025)
- 5faff94 docs: update rag.md example code to prevent errors (#1009) (MichaelClifford, Feb 10, 2025)
- 9bad0a3 build: update uv lock to sync package versions (#1026) (leseb, Feb 10, 2025)
- 4452920 fix: Gaps in doc codegen (#1035) (ellistarn, Feb 10, 2025)
- 60f7510 fix: Readthedocs cannot parse comments, resulting in docs bugs (#1033) (ellistarn, Feb 10, 2025)
- 2f03a80 fix: a bad newline in ollama docs (#1036) (ellistarn, Feb 10, 2025)
- cca7030 fix: Update Qdrant support post-refactor (#1022) (jwm4, Feb 10, 2025)
4 changes: 4 additions & 0 deletions .github/workflows/pre-commit.yml
@@ -23,3 +23,7 @@ jobs:
.pre-commit-config.yaml

- uses: pre-commit/[email protected]

- name: Verify if there are any diff files after pre-commit
run: |
git diff --exit-code || (echo "There are uncommitted changes, run pre-commit locally and commit again" && exit 1)
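The step above fails the job whenever the pre-commit hooks modified files that were not committed. A minimal sketch of the same guard run locally, assuming `git` is on the PATH (the throwaway repo and file names are illustrative only):

```shell
# Reproduce the CI guard: `git diff --exit-code` exits non-zero when the
# working tree has uncommitted changes, e.g. after a hook rewrote a file.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
echo "original" > file.txt
git add file.txt
git -c user.name=ci -c user.email=ci@example.com commit -qm "init"

echo "reformatted" > file.txt   # simulate a pre-commit hook changing a file

if git diff --exit-code >/dev/null; then
  result="clean"
else
  result="dirty"
  echo "There are uncommitted changes, run pre-commit locally and commit again"
fi
```

Running `pre-commit run --all-files` and committing the resulting changes before pushing avoids tripping this check in CI.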
2 changes: 1 addition & 1 deletion .github/workflows/tests.yml
@@ -54,7 +54,7 @@ jobs:
echo "REPORT_FILE=${REPORT_OUTPUT}" >> "$GITHUB_ENV"

export INFERENCE_MODEL=meta-llama/Llama-3.1-8B-Instruct
-LLAMA_STACK_CONFIG=./llama_stack/templates/${{ matrix.provider }}/run.yaml pytest --md-report --md-report-verbose=1 ./tests/client-sdk/inference/test_inference.py --md-report-output "$REPORT_OUTPUT"
+LLAMA_STACK_CONFIG=./llama_stack/templates/${{ matrix.provider }}/run.yaml pytest --md-report --md-report-verbose=1 ./tests/client-sdk/inference/ --md-report-output "$REPORT_OUTPUT"

- name: Output reports to the job summary
if: always()
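The workflow change points pytest at the whole `./tests/client-sdk/inference/` directory instead of a single file, so modules added later (such as the vision tests split out in #1008) are collected automatically. A small sketch of that collection behavior, assuming pytest is installed; the module names below are hypothetical stand-ins:

```shell
# Pytest collects every test_*.py module under a directory target,
# whereas a single-file target would run only that file.
tmp=$(mktemp -d)
printf 'def test_text():\n    assert True\n' > "$tmp/test_text.py"
printf 'def test_vision():\n    assert True\n' > "$tmp/test_vision.py"

# Count collected test node ids (lines containing "::").
collected=$(python -m pytest "$tmp" --collect-only -q 2>/dev/null | grep -c '::')
echo "collected $collected tests"
```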
1 change: 1 addition & 0 deletions .pre-commit-config.yaml
@@ -48,6 +48,7 @@ repos:
hooks:
- id: uv-export
args: ["--frozen", "--no-hashes", "--no-emit-project"]
- id: uv-sync

# - repo: https://github.com/pre-commit/mirrors-mypy
# rev: v1.14.0
44 changes: 0 additions & 44 deletions CHANGELOG.md

This file was deleted.

32 changes: 16 additions & 16 deletions README.md
@@ -34,22 +34,22 @@ By reducing friction and complexity, Llama Stack empowers developers to focus on
### API Providers
Here is a list of the various API providers and available distributions to help developers get started easily,

| **API Provider Builder** | **Environments** | **Agents** | **Inference** | **Memory** | **Safety** | **Telemetry** |
|:------------------------------------------------------------------------------------------:|:----------------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|
| Meta Reference | Single Node | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| SambaNova | Hosted | | :heavy_check_mark: | | | |
| Cerebras | Hosted | | :heavy_check_mark: | | | |
| Fireworks | Hosted | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | |
| AWS Bedrock | Hosted | | :heavy_check_mark: | | :heavy_check_mark: | |
| Together | Hosted | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | |
| Groq | Hosted | | :heavy_check_mark: | | | |
| Ollama | Single Node | | :heavy_check_mark: | | | |
| TGI | Hosted and Single Node | | :heavy_check_mark: | | | |
| NVIDIA NIM | Hosted and Single Node | | :heavy_check_mark: | | | |
| Chroma | Single Node | | | :heavy_check_mark: | | |
| PG Vector | Single Node | | | :heavy_check_mark: | | |
| PyTorch ExecuTorch | On-device iOS | :heavy_check_mark: | :heavy_check_mark: | | | |
| vLLM | Hosted and Single Node | | :heavy_check_mark: | | | |
| **API Provider Builder** | **Environments** | **Agents** | **Inference** | **Memory** | **Safety** | **Telemetry** |
|:------------------------:|:----------------------:|:----------:|:-------------:|:----------:|:----------:|:-------------:|
| Meta Reference | Single Node | ✅ | ✅ | ✅ | ✅ | ✅ |
| SambaNova | Hosted | | ✅ | | | |
| Cerebras | Hosted | | ✅ | | | |
| Fireworks | Hosted | ✅ | ✅ | ✅ | | |
| AWS Bedrock | Hosted | | ✅ | | ✅ | |
| Together | Hosted | ✅ | ✅ | | ✅ | |
| Groq | Hosted | | ✅ | | | |
| Ollama | Single Node | | ✅ | | | |
| TGI | Hosted and Single Node | | ✅ | | | |
| NVIDIA NIM | Hosted and Single Node | | ✅ | | | |
| Chroma | Single Node | | | ✅ | | |
| PG Vector | Single Node | | | ✅ | | |
| PyTorch ExecuTorch | On-device iOS | ✅ | ✅ | | | |
| vLLM | Hosted and Single Node | | ✅ | | | |

### Distributions
