forked from llamastack/llama-stack
adding the fiddlecube provider #1
Open: kaushik-himself wants to merge 34 commits into main from fiddlecube-safety
Conversation
Also cleans up print messages.

Also improves error handling: fields are read with `get` to prevent missing-field errors, and a safety violation sent from the server is processed and returned as a response.
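A minimal sketch of what that violation handling might look like; the server payload field names are assumptions, only the `llama_stack.apis.safety` types come from the upstream API:

```python
from typing import Any

from llama_stack.apis.safety import RunShieldResponse, SafetyViolation, ViolationLevel


def build_shield_response(payload: dict[str, Any]) -> RunShieldResponse:
    """Map a (hypothetical) FiddleCube server payload onto the Llama Stack safety types."""
    violation = payload.get("violation")  # .get() so a missing field never raises
    if not violation:
        # No violation field means the content was judged safe.
        return RunShieldResponse(violation=None)
    return RunShieldResponse(
        violation=SafetyViolation(
            violation_level=ViolationLevel.ERROR,
            user_message=violation.get("user_message", "Content violates the safety policy."),
            metadata={"category": violation.get("category", "unknown")},
        )
    )
```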
# What does this PR do?

Fix link.
Summary:

Test Plan:
LLAMA_STACK_CONFIG=fireworks pytest -s -v tests/client-sdk/ --safety-shield meta-llama/Llama-Guard-3-8
All tests passed.
# What does this PR do?

Refactored tests by removing the unused exception alias (`as exc_info`) in `pytest.raises`, improving code clarity and reducing lint warnings. `exc_info` was never used.

Signed-off-by: Sébastien Han <[email protected]>
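For illustration, the shape of that change; the function and exception type here are made up, only the `pytest.raises` pattern is from the commit:

```python
import pytest


def divide(a: float, b: float) -> float:
    return a / b


# Before: the alias was bound but never used, which lint flags.
def test_divide_by_zero_before() -> None:
    with pytest.raises(ZeroDivisionError) as exc_info:
        divide(1, 0)


# After: same assertion, no unused alias.
def test_divide_by_zero_after() -> None:
    with pytest.raises(ZeroDivisionError):
        divide(1, 0)
```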
Fixes llamastack#966 (llamastack#969)

Fixes llamastack#966. Verified that:

1. The correct list of APIs is printed out when running `llama stack list-providers`.
2. `llama stack list-providers <api>` works as expected.

Signed-off-by: Yuan Tang <[email protected]>
We use weekly releases as a way to communicate important improvements. Keeping this information synced across is more overhead than we have bandwidth for right now. We may change this process over time.
# What does this PR do?

Imported `ToolConfig` from the `llama_stack.apis.inference` module to resolve the missing reference and ensure proper functionality within the `groq.py` file.

Signed-off-by: Sébastien Han <[email protected]>

## Test Plan

Without the change, pytest will run with the following error:

```
uv run pytest -v -s -k "ollama" llama_stack/providers/tests/
/Users/leseb/Documents/AI/llama-stack/.venv/lib/python3.13/site-packages/pytest_asyncio/plugin.py:207: PytestDeprecationWarning: The configuration option "asyncio_default_fixture_loop_scope" is unset. The event loop scope for asynchronous fixtures will default to the fixture caching scope. Future versions of pytest-asyncio will default the loop scope for asynchronous fixtures to function scope. Set the default fixture loop scope explicitly in order to avoid unexpected behavior in the future. Valid fixture loop scopes are: "function", "class", "module", "package", "session"
  warnings.warn(PytestDeprecationWarning(_DEFAULT_FIXTURE_LOOP_SCOPE_UNSET))
============================================ test session starts =============================================
platform darwin -- Python 3.13.1, pytest-8.3.4, pluggy-1.5.0 -- /Users/leseb/Documents/AI/llama-stack/.venv/bin/python3
cachedir: .pytest_cache
metadata: {'Python': '3.13.1', 'Platform': 'macOS-15.3-arm64-arm-64bit-Mach-O', 'Packages': {'pytest': '8.3.4', 'pluggy': '1.5.0'}, 'Plugins': {'html': '4.1.1', 'metadata': '3.1.1', 'asyncio': '0.25.3', 'anyio': '4.8.0', 'nbval': '0.11.0'}}
rootdir: /Users/leseb/Documents/AI/llama-stack
configfile: pyproject.toml
plugins: html-4.1.1, metadata-3.1.1, asyncio-0.25.3, anyio-4.8.0, nbval-0.11.0
asyncio: mode=Mode.STRICT, asyncio_default_fixture_loop_scope=None
collected 379 items / 1 error / 349 deselected / 30 selected

=================================================== ERRORS ===================================================
__________________ ERROR collecting llama_stack/providers/tests/inference/groq/test_init.py __________________
llama_stack/providers/tests/inference/groq/test_init.py:11: in <module>
    from llama_stack.providers.remote.inference.groq.groq import GroqInferenceAdapter
llama_stack/providers/remote/inference/groq/groq.py:72: in <module>
    class GroqInferenceAdapter(Inference, ModelRegistryHelper, NeedsRequestProviderData):
llama_stack/providers/remote/inference/groq/groq.py:102: in GroqInferenceAdapter
    tool_config: Optional[ToolConfig] = None,
E   NameError: name 'ToolConfig' is not defined
========================================== short test summary info ===========================================
ERROR llama_stack/providers/tests/inference/groq/test_init.py - NameError: name 'ToolConfig' is not defined
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
=============================== 349 deselected, 22 warnings, 1 error in 0.28s ================================
```

With the change, the test continues to run and fails with a different error:

```
uv run pytest -v -s llama_stack/providers/tests/
/Users/leseb/Documents/AI/llama-stack/.venv/lib/python3.13/site-packages/pytest_asyncio/plugin.py:207: PytestDeprecationWarning: The configuration option "asyncio_default_fixture_loop_scope" is unset. The event loop scope for asynchronous fixtures will default to the fixture caching scope. Future versions of pytest-asyncio will default the loop scope for asynchronous fixtures to function scope. Set the default fixture loop scope explicitly in order to avoid unexpected behavior in the future. Valid fixture loop scopes are: "function", "class", "module", "package", "session"
  warnings.warn(PytestDeprecationWarning(_DEFAULT_FIXTURE_LOOP_SCOPE_UNSET))
============================================ test session starts =============================================
platform darwin -- Python 3.13.1, pytest-8.3.4, pluggy-1.5.0 -- /Users/leseb/Documents/AI/llama-stack/.venv/bin/python3
cachedir: .pytest_cache
metadata: {'Python': '3.13.1', 'Platform': 'macOS-15.3-arm64-arm-64bit-Mach-O', 'Packages': {'pytest': '8.3.4', 'pluggy': '1.5.0'}, 'Plugins': {'html': '4.1.1', 'metadata': '3.1.1', 'asyncio': '0.25.3', 'anyio': '4.8.0', 'nbval': '0.11.0'}}
rootdir: /Users/leseb/Documents/AI/llama-stack
configfile: pyproject.toml
plugins: html-4.1.1, metadata-3.1.1, asyncio-0.25.3, anyio-4.8.0, nbval-0.11.0
asyncio: mode=Mode.STRICT, asyncio_default_fixture_loop_scope=None
collected 342 items / 1 error

=================================================== ERRORS ===================================================
______________ ERROR collecting llama_stack/providers/tests/inference/test_vision_inference.py _______________
llama_stack/providers/tests/inference/test_vision_inference.py:29: in <module>
    class TestVisionModelInference:
llama_stack/providers/tests/inference/test_vision_inference.py:35: in TestVisionModelInference
    ImageContentItem(image=dict(data=PASTA_IMAGE)),
E   pydantic_core._pydantic_core.ValidationError: 1 validation error for ImageContentItem
E   image.data
E     Input should be a valid string, unable to parse raw data as a unicode string [type=string_unicode, input_value=b'\xff\xd8\xff\xe0\x00\x1...0\xe6\x9f5\xb5?\xff\xd9', input_type=bytes]
E     For further information visit https://errors.pydantic.dev/2.10/v/string_unicode
========================================== short test summary info ===========================================
ERROR llama_stack/providers/tests/inference/test_vision_inference.py - pydantic_core._pydantic_core.ValidationError: 1 validation error for ImageContentItem
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
======================================= 22 warnings, 1 error in 0.25s ========================================
```

This is fixed in llamastack#1003.

Signed-off-by: Sébastien Han <[email protected]>
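The fix itself is a one-line import at the top of `groq.py`, shown here in isolation:

```python
from llama_stack.apis.inference import ToolConfig  # resolves `NameError: name 'ToolConfig' is not defined`
```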
Summary: Test Plan:
# What does this PR do?

This PR splits the inference tests into text and vision to make testing on the vLLM provider easier, as mentioned in llamastack#951: serving multiple models (e.g. Llama-3.2-11B-Vision-Instruct and Llama-3.1-8B-Instruct) on a single port using the OpenAI API is [not supported yet](https://docs.vllm.ai/en/v0.5.5/serving/faq.html), so it is a bit tricky to test both at the same time.

## Test Plan

All previously passing tests related to text still pass: `LLAMA_STACK_BASE_URL=http://localhost:5002 pytest -v tests/client-sdk/inference/test_text_inference.py`

All vision tests passed via `LLAMA_STACK_BASE_URL=http://localhost:5002 pytest -v tests/client-sdk/inference/test_vision_inference.py`.

Signed-off-by: Yuan Tang <[email protected]>
# What does this PR do?

Enables HTTPS option for Llama Stack. While doing so, introduces a `ServerConfig` sub-structure to house all server related configuration (port, ssl, etc.).

Also simplified the `start_container.sh` entrypoint to simply be `python` instead of a complex bash command line.

## Test Plan

Conda:

Run:

```bash
$ llama stack build --template together
$ llama stack run --port 8322  # ensure server starts
$ llama-stack-client configure --endpoint http://localhost:8322
$ llama-stack-client models list
```

Create a self-signed SSL key / cert pair. Then, using a local checkout of `llama-stack-client-python`, change https://github.com/meta-llama/llama-stack-client-python/blob/main/src/llama_stack_client/_base_client.py#L759 to add `kwargs.setdefault("verify", False)` so SSL verification is disabled. Then:

```bash
$ llama stack run --port 8322 --tls-keyfile <KEYFILE> --tls-certfile <CERTFILE>
$ llama-stack-client configure --endpoint https://localhost:8322  # notice the `https`
$ llama-stack-client models list
```

Also tested with containers (but of course one needs to make sure the cert and key files are appropriately provided to the container).
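For context, a rough sketch of what a `ServerConfig` sub-structure grouping these settings could look like; the field names are assumptions inferred from the `--port`, `--tls-keyfile`, and `--tls-certfile` flags above, not the actual upstream definition:

```python
from typing import Optional

from pydantic import BaseModel


class ServerConfig(BaseModel):
    """Hypothetical grouping of server-related settings (port, TLS, ...)."""

    port: int = 8321  # illustrative default
    tls_keyfile: Optional[str] = None  # path to TLS key; HTTPS is enabled when both files are set
    tls_certfile: Optional[str] = None  # path to TLS certificate
```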
# What does this PR do?

Previously, the test was failing due to a pydantic validation error caused by passing raw binary image data instead of a valid Unicode string. This fix encodes the image data as base64, ensuring it is a valid string format compatible with `ImageContentItem`.

Error:

```
______________ ERROR collecting llama_stack/providers/tests/inference/test_vision_inference.py _______________
llama_stack/providers/tests/inference/test_vision_inference.py:31: in <module>
    class TestVisionModelInference:
llama_stack/providers/tests/inference/test_vision_inference.py:37: in TestVisionModelInference
    ImageContentItem(image=dict(data=PASTA_IMAGE)),
E   pydantic_core._pydantic_core.ValidationError: 1 validation error for ImageContentItem
E   image.data
E     Input should be a valid string, unable to parse raw data as a unicode string [type=string_unicode, input_value=b'\xff\xd8\xff\xe0\x00\x1...0\xe6\x9f5\xb5?\xff\xd9', input_type=bytes]
E     For further information visit https://errors.pydantic.dev/2.10/v/string_unicode
```

Signed-off-by: Sébastien Han <[email protected]>

## Test Plan

Execute the following:

```
ollama run llama3.2-vision --keepalive 2m &
uv run pytest -v -s -k "ollama" --inference-model=llama3.2-vision:latest llama_stack/providers/tests/inference/test_vision_inference.py

llama_stack/providers/tests/inference/test_vision_inference.py::TestVisionModelInference::test_vision_chat_completion_non_streaming[-ollama-image0-expected_strings0] PASSED
llama_stack/providers/tests/inference/test_vision_inference.py::TestVisionModelInference::test_vision_chat_completion_non_streaming[-ollama-image1-expected_strings1] FAILED
llama_stack/providers/tests/inference/test_vision_inference.py::TestVisionModelInference::test_vision_chat_completion_streaming[-ollama] FAILED
```

The last two tests are failing because Cloudflare blocked me from accessing https://www.healthypawspetinsurance.com/Images/V3/DogAndPuppyInsurance/Dog_CTA_Desktop_HeroImage.jpg but this has no impact on the current fix.

Signed-off-by: Sébastien Han <[email protected]>
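A minimal sketch of the fix described above, assuming the image is read from a local file; the file name is illustrative and the import path may differ by version:

```python
import base64
from pathlib import Path

from llama_stack.apis.common.content_types import ImageContentItem  # import path assumed

# Raw JPEG bytes: passing these directly is what triggered the pydantic error.
raw_bytes = Path("pasta.jpeg").read_bytes()

# Encode to base64 and decode to str so image.data is a valid unicode string.
PASTA_IMAGE = base64.b64encode(raw_bytes).decode("utf-8")

item = ImageContentItem(image=dict(data=PASTA_IMAGE))
```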
…lamastack#950)

# What does this PR do?

Currently this is the output when you run a distribution locally without running `llama stack build`:

```
Traceback (most recent call last):
  File "/Users/charliedoern/Documents/llama-sdk.py", line 25, in <module>
    models = client.models.list()
             ^^^^^^^^^^^^^^^^^^^^
  File "/Users/charliedoern/Documents/llama-stack-client-python/src/llama_stack_client/resources/models.py", line 107, in list
    raise exc
  File "/Users/charliedoern/Documents/llama-stack-client-python/src/llama_stack_client/resources/models.py", line 95, in list
    return self._get(
           ^^^^^^^^^^
  File "/Users/charliedoern/Documents/llama-stack-client-python/src/llama_stack_client/_base_client.py", line 1212, in get
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/charliedoern/Documents/llama-stack/llama_stack/distribution/library_client.py", line 168, in request
    return asyncio.run(self.async_client.request(*args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.11.10/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.11.10/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.11.10/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/Users/charliedoern/Documents/llama-stack/llama_stack/distribution/library_client.py", line 258, in request
    if not self.endpoint_impls:
       ^^^^^^^^^^^^^^^^^^^
AttributeError: 'AsyncLlamaStackAsLibraryClient' object has no attribute 'endpoint_impls'
```

The intended exception is never raised. This PR adds an except for `AttributeError` so users can catch errors when they call things like `models.list()`, and so that a more useful message, telling them the client is not properly initialized, is printed.

## Test Plan

- I ran the script found here: https://llama-stack.readthedocs.io/en/latest/getting_started/index.html#run-inference-with-python-sdk locally with the changes in this PR and the exception was caught successfully.

Signed-off-by: Charlie Doern <[email protected]>
Co-authored-by: Ashwin Bharambe <[email protected]>
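A sketch of the kind of handling described; the class below is a simplified stand-in and the exact error text is an assumption, only the idea of catching `AttributeError` and reporting an uninitialized client comes from the PR:

```python
class AsyncLlamaStackAsLibraryClient:
    """Simplified stand-in: endpoint_impls only exists after initialize() runs."""

    async def initialize(self) -> None:
        # In the real client this is built from the distribution config.
        self.endpoint_impls = {}

    async def request(self, path: str) -> str:
        try:
            impls = self.endpoint_impls
        except AttributeError as exc:
            # Without this, callers of e.g. models.list() only see a bare AttributeError.
            raise ValueError(
                "Client is not properly initialized. Please call initialize() first."
            ) from exc
        return f"dispatching {path} across {len(impls)} endpoint implementations"
```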
# What does this PR do?

Moved model availability check logic into a dedicated `check_model_availability` function. Eliminated redundant code by reusing the helper function in both embedding and non-embedding model registration.

Signed-off-by: Sébastien Han <[email protected]>

## Test Plan

Run Ollama and serve 2 models to get most of the unit tests to pass:

```
ollama run llama3.2:3b-instruct-fp16 --keepalive 2m &
ollama run llama3.1:8b --keepalive 2m &
```

Run the unit test:

```
uv run pytest -v -k "ollama" --inference-model=llama3.2:3b-instruct-fp16 llama_stack/providers/tests/inference/test_model_registration.py
/Users/leseb/Documents/AI/llama-stack/.venv/lib/python3.13/site-packages/pytest_asyncio/plugin.py:207: PytestDeprecationWarning: The configuration option "asyncio_default_fixture_loop_scope" is unset. The event loop scope for asynchronous fixtures will default to the fixture caching scope. Future versions of pytest-asyncio will default the loop scope for asynchronous fixtures to function scope. Set the default fixture loop scope explicitly in order to avoid unexpected behavior in the future. Valid fixture loop scopes are: "function", "class", "module", "package", "session"
  warnings.warn(PytestDeprecationWarning(_DEFAULT_FIXTURE_LOOP_SCOPE_UNSET))
============================================ test session starts =============================================
platform darwin -- Python 3.13.1, pytest-8.3.4, pluggy-1.5.0 -- /Users/leseb/Documents/AI/llama-stack/.venv/bin/python3
cachedir: .pytest_cache
metadata: {'Python': '3.13.1', 'Platform': 'macOS-15.3-arm64-arm-64bit-Mach-O', 'Packages': {'pytest': '8.3.4', 'pluggy': '1.5.0'}, 'Plugins': {'html': '4.1.1', 'metadata': '3.1.1', 'asyncio': '0.25.3', 'anyio': '4.8.0', 'nbval': '0.11.0'}}
rootdir: /Users/leseb/Documents/AI/llama-stack
configfile: pyproject.toml
plugins: html-4.1.1, metadata-3.1.1, asyncio-0.25.3, anyio-4.8.0, nbval-0.11.0
asyncio: mode=Mode.STRICT, asyncio_default_fixture_loop_scope=None
collected 65 items / 60 deselected / 5 selected

llama_stack/providers/tests/inference/test_model_registration.py::TestModelRegistration::test_register_unsupported_model[-ollama] PASSED [ 20%]
llama_stack/providers/tests/inference/test_model_registration.py::TestModelRegistration::test_register_nonexistent_model[-ollama] PASSED [ 40%]
llama_stack/providers/tests/inference/test_model_registration.py::TestModelRegistration::test_register_with_llama_model[-ollama] FAILED [ 60%]
llama_stack/providers/tests/inference/test_model_registration.py::TestModelRegistration::test_initialize_model_during_registering[-ollama] FAILED [ 80%]
llama_stack/providers/tests/inference/test_model_registration.py::TestModelRegistration::test_register_with_invalid_llama_model[-ollama] PASSED [100%]

================================================== FAILURES ==================================================
_______________________ TestModelRegistration.test_register_with_llama_model[-ollama] ________________________
llama_stack/providers/tests/inference/test_model_registration.py:54: in test_register_with_llama_model
    _ = await models_impl.register_model(
llama_stack/providers/utils/telemetry/trace_protocol.py:91: in async_wrapper
    result = await method(self, *args, **kwargs)
llama_stack/distribution/routers/routing_tables.py:245: in register_model
    registered_model = await self.register_object(model)
llama_stack/distribution/routers/routing_tables.py:192: in register_object
    registered_obj = await register_object_with_provider(obj, p)
llama_stack/distribution/routers/routing_tables.py:53: in register_object_with_provider
    return await p.register_model(obj)
llama_stack/providers/utils/telemetry/trace_protocol.py:91: in async_wrapper
    result = await method(self, *args, **kwargs)
llama_stack/providers/remote/inference/ollama/ollama.py:368: in register_model
    await check_model_availability(model.provider_resource_id)
llama_stack/providers/remote/inference/ollama/ollama.py:359: in check_model_availability
    raise ValueError(
E   ValueError: Model 'custom-model' is not available in Ollama. Available models: llama3.1:8b, llama3.2:3b-instruct-fp16
__________________ TestModelRegistration.test_initialize_model_during_registering[-ollama] ___________________
llama_stack/providers/tests/inference/test_model_registration.py:85: in test_initialize_model_during_registering
    mock_load_model.assert_called_once()
/opt/homebrew/Cellar/[email protected]/3.13.1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/unittest/mock.py:956: in assert_called_once
    raise AssertionError(msg)
E   AssertionError: Expected 'load_model' to have been called once. Called 0 times.
-------------------------------------------- Captured stderr call --------------------------------------------
W0207 11:55:26.777000 90854 .venv/lib/python3.13/site-packages/torch/distributed/elastic/multiprocessing/redirects.py:29] NOTE: Redirects are currently not supported in Windows or MacOs.
========================================== short test summary info ===========================================
FAILED llama_stack/providers/tests/inference/test_model_registration.py::TestModelRegistration::test_register_with_llama_model[-ollama] - ValueError: Model 'custom-model' is not available in Ollama. Available models: llama3.1:8b, llama3.2:3b-i...
FAILED llama_stack/providers/tests/inference/test_model_registration.py::TestModelRegistration::test_initialize_model_during_registering[-ollama] - AssertionError: Expected 'load_model' to have been called once. Called 0 times.
=========================== 2 failed, 3 passed, 60 deselected, 2 warnings in 1.84s ===========================
```

We only "care" about `test_register_nonexistent_model` for this code.

Signed-off-by: Sébastien Han <[email protected]>
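A sketch of what the extracted helper might look like; the client call and response shape are assumptions, while the error message mirrors the log above:

```python
from typing import Any


async def check_model_availability(client: Any, model_id: str) -> None:
    """Raise a clear error if `model_id` is not currently served by Ollama.

    Assumes `client.list()` returns the locally available models; the same helper
    is reused by both embedding and non-embedding model registration.
    """
    response = await client.list()
    available = [m["name"] for m in response["models"]]
    if model_id not in available:
        raise ValueError(
            f"Model '{model_id}' is not available in Ollama. Available models: {', '.join(available)}"
        )
```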
# What does this PR do?

Added examples (Section 4) of using the Llama Stack 0.1 distro on together and Llama 3.2 to answer questions about an image with the LS Chat and Agent APIs.
…k#1016)

# What does this PR do?

This significantly shortens the test time (about 10x faster), since most of the time is spent on outputting the tokens "there are several planets in our solar system that have...". We want to have an answer quicker, especially when testing even larger models.

## Test Plan

```
LLAMA_STACK_BASE_URL=http://localhost:5002 pytest -v tests/client-sdk/inference/test_text_inference.py -k "test_text_chat_completion_non_streaming or test_text_chat_completion_streaming"
================================================================== test session starts ===================================================================
platform linux -- Python 3.10.16, pytest-8.3.4, pluggy-1.5.0 -- /home/yutang/.conda/envs/myenv/bin/python3.10
cachedir: .pytest_cache
rootdir: /home/yutang/repos/llama-stack
configfile: pyproject.toml
plugins: anyio-4.7.0
collected 12 items / 8 deselected / 4 selected

tests/client-sdk/inference/test_text_inference.py::test_text_chat_completion_non_streaming[meta-llama/Llama-3.1-8B-Instruct-Which planet do humans live on?-Earth] PASSED [ 25%]
tests/client-sdk/inference/test_text_inference.py::test_text_chat_completion_non_streaming[meta-llama/Llama-3.1-8B-Instruct-Which planet has rings around it with a name starting with letter S?-Saturn] PASSED [ 50%]
tests/client-sdk/inference/test_text_inference.py::test_text_chat_completion_streaming[meta-llama/Llama-3.1-8B-Instruct-What's the name of the Sun in latin?-Sol] PASSED [ 75%]
tests/client-sdk/inference/test_text_inference.py::test_text_chat_completion_streaming[meta-llama/Llama-3.1-8B-Instruct-What is the name of the US captial?-Washington] PASSED [100%]
```

Signed-off-by: Yuan Tang <[email protected]>
# What does this PR do?

Updated requirements.txt.

Co-authored-by: sarthakdeshpande <[email protected]>
…ck#1019)

# What does this PR do?

This PR removes the warnings when running tests for the `remote::vllm` provider:

```
Detected the chat template content format to be 'openai'. You can set `--chat-template-content-format` to override this.
```

## Test Plan

All tests passed without the warning messages shown above.

Signed-off-by: Yuan Tang <[email protected]>
# What does this PR do?

The table on the project's PyPI page does not render check marks. This PR switches to using the unicode symbol directly, which renders correctly on PyPI.

(Before/after screenshots not included.)

Signed-off-by: Yuan Tang <[email protected]>
# What does this PR do?

Updated `uv.lock` to reflect the latest versions of `llama-models`, `llama-stack`, and `llama-stack-client` (bumped to 0.1.2). This ensures dependency consistency and avoids potential issues with outdated package references.

Added the `uv-sync` hook from the `uv-pre-commit` repository to ensure synchronization of dependencies.

Signed-off-by: Sébastien Han <[email protected]>
# What does this PR do?

Catches docs up to source with:

```
python llama_stack/scripts/distro_codegen.py
```

## Test Plan

Manually checked with:

```
sphinx-autobuild docs/source build/html
```
# What does this PR do?

Catches a bug in the previous codegen which was removing newlines.

## Test Plan

```
python llama_stack/scripts/distro_codegen.py
```
# What does this PR do?

I tried running the Qdrant provider and found some bugs. See llamastack#1021 for details. @terrytangyuan wrote there:

> Please feel free to submit your changes in a PR. I fixed similar issues for pgvector provider. This might be an issue introduced from a refactoring.

So I am submitting this PR.

Closes llamastack#1021

## Test Plan

Here are the highlights for what I did to test this:

References:
- https://llama-stack.readthedocs.io/en/latest/getting_started/index.html
- https://github.com/meta-llama/llama-stack-apps/blob/main/examples/agents/rag_with_vector_db.py
- https://github.com/meta-llama/llama-stack/blob/main/docs/zero_to_hero_guide/README.md#build-configure-and-run-llama-stack

Install and run Qdrant server:

```
podman pull qdrant/qdrant
mkdir qdrant-data
podman run -p 6333:6333 -v $(pwd)/qdrant-data:/qdrant/storage qdrant/qdrant
```

Install and run Llama Stack from the venv-support PR (mainly because I didn't want to install conda):

```
brew install cmake # Should just need this once
git clone https://github.com/meta-llama/llama-models.git
gh repo clone cdoern/llama-stack
cd llama-stack
gh pr checkout 1018 # This is the checkout that introduces venv support for build/run. Otherwise you have to use conda. Eventually this will be part of main, hopefully.
uv sync --extra dev
uv pip install -e .
source .venv/bin/activate
uv pip install qdrant_client
LLAMA_STACK_DIR=$(pwd) LLAMA_MODELS_DIR=../llama-models llama stack build --template ollama --image-type venv
```

Then edit the run config:

```
edit llama_stack/templates/ollama/run.yaml
```

In that editor, under:

```
vector_io:
```

add:

```
  - provider_id: qdrant
    provider_type: remote::qdrant
    config: {}
```

See https://github.com/meta-llama/llama-stack/blob/main/llama_stack/providers/remote/vector_io/qdrant/config.py#L14 for config options (but I didn't need any).

```
LLAMA_STACK_DIR=$(pwd) LLAMA_MODELS_DIR=../llama-models llama stack run ollama --image-type venv \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=$INFERENCE_MODEL \
  --env SAFETY_MODEL=$SAFETY_MODEL \
  --env OLLAMA_URL=$OLLAMA_URL
```

Then I tested it out in a notebook. Key highlights included:

```
qdrant_provider = None
for provider in client.providers.list():
    if provider.api == "vector_io" and provider.provider_id == "qdrant":
        qdrant_provider = provider
qdrant_provider
assert qdrant_provider is not None, "QDrant is not a provider. You need to edit the run yaml file you use in your `llama stack run` call"

vector_db_id = f"test-vector-db-{uuid.uuid4().hex}"
client.vector_dbs.register(
    vector_db_id=vector_db_id,
    embedding_model="all-MiniLM-L6-v2",
    embedding_dimension=384,
    provider_id=qdrant_provider.provider_id,
)
```

Other than that, I just followed what was in https://llama-stack.readthedocs.io/en/latest/getting_started/index.html

It would be good to have automated tests for this in the future, but that would be a big undertaking.

Signed-off-by: Bill Murdock <[email protected]>
Add fiddlecube as a new remote safety provider
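For a sense of how the new provider would be exercised once configured, here is a sketch using the standard Llama Stack safety API; the shield id, base URL, and message below are illustrative, not values defined by this PR:

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")  # illustrative endpoint

# "fiddlecube" is a placeholder shield id; use whatever shield the provider registers.
response = client.safety.run_shield(
    shield_id="fiddlecube",
    messages=[{"role": "user", "content": "Tell me how to pick a lock."}],
    params={},
)

if response.violation:
    print("Violation:", response.violation.user_message)
else:
    print("No safety violation detected.")
```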