chore(pre-commit): add pre-commit hook to enforce llama_stack logger usage #3061
Conversation
Thanks for the update! Can you please check the section called "Unchanged files with check annotations" at the end of this PR's files list? It appears that some violations still occur solely due to the …
Thanks so much for your review 🙏🏽 I have already checked actually :), the import is needed for this func. I tried to replace it, and added the comment though 👍🏽
fixed 👍🏽
I understand. But can you please add it to all the imports which require a direct import? I've mentioned it already in #3061 (comment): for any usage where you need that import, you have to add the comment to allow it.
Thanks for your review :) It seems that the view mentioned in #3061 (comment) is only available to reviewers, as I have been through the PR (files and checks) many times and cannot see the same warning :) But I should have checked all.
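The allow-comment mechanism discussed above can be illustrated with a small sketch of the hook's core check: flag lines that import the stdlib `logging` module directly unless they carry an explicit allow comment. The marker text `# allow-direct-logging` is an assumed placeholder for illustration, not the hook's actual syntax.

```python
import re

# Match direct imports of the stdlib logging module.
DIRECT_IMPORT = re.compile(r"^\s*(import logging\b|from logging\b)")
# Assumed placeholder marker; the real hook's allow-comment may differ.
ALLOW_MARKER = "# allow-direct-logging"

def find_violations(source: str) -> list[int]:
    """Return 1-based line numbers that import logging without the marker."""
    return [
        lineno
        for lineno, line in enumerate(source.splitlines(), start=1)
        if DIRECT_IMPORT.match(line) and ALLOW_MARKER not in line
    ]

sample = (
    "import logging\n"                          # flagged
    "from llama_stack.log import get_logger\n"  # fine
    "import logging  # allow-direct-logging\n"  # explicitly allowed
)
print(find_violations(sample))  # [1]
```

A hook like this would be wired into `.pre-commit-config.yaml` as a local hook that receives the staged filenames and exits non-zero on any violation.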
Review threads on llama_stack/providers/inline/agents/meta_reference/openai_responses.py (outdated, resolved)
@mattf hello ✋🏽 I don't know why it did not merge, as it was approved yesterday!
rebased 👍🏽
…usage
Signed-off-by: Mustafa Elbehery <[email protected]>
@Elbehery merge in main so we can get pre-commit passing
Sorry, just to be sure, so I do not need to correct anything?
Thanks so much @mattf 🙏🏽 Can we merge this now?
Signed-off-by: Francisco Javier Arceo <[email protected]>

chore: Enable keyword search for Milvus inline (llamastack#3073)

With milvus-io/milvus-lite#294, Milvus Lite supports keyword search using BM25. When keyword search was introduced, it was explicitly disabled for inline Milvus. This PR removes the need for that check and enables `inline::milvus` for tests.

Run llama stack with `inline::milvus` enabled:

```
pytest tests/integration/vector_io/test_openai_vector_stores.py::test_openai_vector_store_search_modes --stack-config=http://localhost:8321 --embedding-model=all-MiniLM-L6-v2 -v
```

```
INFO 2025-08-07 17:06:20,932 tests.integration.conftest:64 tests: Setting DISABLE_CODE_SANDBOX=1 for macOS
========================= test session starts =========================
platform darwin -- Python 3.12.11, pytest-7.4.4, pluggy-1.5.0 -- /Users/vnarsing/miniconda3/envs/stack-client/bin/python
cachedir: .pytest_cache
rootdir: /Users/vnarsing/go/src/github/meta-llama/llama-stack
configfile: pyproject.toml
plugins: asyncio-0.23.8, cov-6.0.0, timeout-2.2.0, socket-0.7.0, html-3.1.1, langsmith-0.3.39, anyio-4.8.0, metadata-3.0.0
asyncio: mode=Mode.AUTO
collected 3 items

tests/integration/vector_io/test_openai_vector_stores.py::test_openai_vector_store_search_modes[None-None-all-MiniLM-L6-v2-None-384-vector] PASSED  [ 33%]
tests/integration/vector_io/test_openai_vector_stores.py::test_openai_vector_store_search_modes[None-None-all-MiniLM-L6-v2-None-384-keyword] PASSED [ 66%]
tests/integration/vector_io/test_openai_vector_stores.py::test_openai_vector_store_search_modes[None-None-all-MiniLM-L6-v2-None-384-hybrid] PASSED [100%]

========================= 3 passed in 4.75s =========================
```

Signed-off-by: Varsha Prasad Narsing <[email protected]>
Co-authored-by: Francisco Arceo <[email protected]>

chore: Fixup main pre commit (llamastack#3204)

build: Bump version to 0.2.18

chore: Faster npm pre-commit (llamastack#3206)

Adds npm installation to pre-commit.yml and caches the ui; removes node installation during pre-commit.

Signed-off-by: Francisco Javier Arceo <[email protected]>

checking in for tonight, wip moving to agents api

Signed-off-by: Francisco Javier Arceo <[email protected]>

remove log

Signed-off-by: Francisco Javier Arceo <[email protected]>

updated

Signed-off-by: Francisco Javier Arceo <[email protected]>

fix: disable ui-prettier & ui-eslint (llamastack#3207)

chore(pre-commit): add pre-commit hook to enforce llama_stack logger usage (llamastack#3061)

This PR adds a step in pre-commit to enforce using the `llama_stack` logger. Currently, various parts of the code base use different loggers. As a custom `llama_stack` logger exists and is used in the codebase, it is better to standardize its utilization.

Signed-off-by: Mustafa Elbehery <[email protected]>
Co-authored-by: Matthew Farrellee <[email protected]>

fix: fix `openai_embeddings` for asymmetric embedding NIMs (llamastack#3205)

NVIDIA asymmetric embedding models (e.g., `nvidia/llama-3.2-nv-embedqa-1b-v2`) require an `input_type` parameter not present in the standard OpenAI embeddings API. This PR adds `input_type="query"` as the default and updates the documentation to suggest using the `embedding` API for passage embeddings.

Resolves llamastack#2892

```
pytest -s -v tests/integration/inference/test_openai_embeddings.py --stack-config="inference=nvidia" --embedding-model="nvidia/llama-3.2-nv-embedqa-1b-v2" --env NVIDIA_API_KEY={nvidia_api_key} --env NVIDIA_BASE_URL="https://integrate.api.nvidia.com"
```

cleaning up

Signed-off-by: Francisco Javier Arceo <[email protected]>

updating session manager to cache messages locally

Signed-off-by: Francisco Javier Arceo <[email protected]>

fix linter

Signed-off-by: Francisco Javier Arceo <[email protected]>

more cleanup

Signed-off-by: Francisco Javier Arceo <[email protected]>
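The asymmetric-embedding fix described above amounts to sending an extra `input_type` field alongside the standard OpenAI embeddings payload. A minimal sketch of building such a request body follows; the helper name and structure are assumptions for illustration, not the provider's actual code.

```python
def build_embedding_request(texts, model, input_type="query"):
    """Build an OpenAI-style embeddings payload with the NVIDIA-specific
    input_type field ("query" for queries, "passage" for documents)."""
    return {
        "model": model,
        "input": list(texts),
        # Non-standard field required by asymmetric NVIDIA embedding NIMs;
        # it is not part of the OpenAI embeddings API.
        "input_type": input_type,
    }

payload = build_embedding_request(
    ["what is a NIM?"],
    model="nvidia/llama-3.2-nv-embedqa-1b-v2",
)
print(payload["input_type"])  # query
```

With an OpenAI-compatible client, such a non-standard field would typically be passed through a pass-through mechanism such as `extra_body` rather than as a named argument.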
# What does this PR do?

This PR renames categories of llama_stack loggers, aligning logging categories with the package name and addressing reviews from the initial #2868. This is a follow-up to #3061.

Replaces #2868
Part of #2865
cc @leseb @rhuss

Signed-off-by: Mustafa Elbehery <[email protected]>
What does this PR do?

This PR adds a step in pre-commit to enforce using the `llama_stack` logger. Currently, various parts of the code base use different loggers. As a custom `llama_stack` logger exists and is used in the codebase, it is better to standardize its utilization.

This PR contains two commits:
1. A pre-commit hook that checks that the `llama_stack` custom logger is being used, and fails otherwise.
2. A migration of the remaining code to the `llama_stack` logger.

Replaces #2868
Part of #2865
cc @leseb @rhuss
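For reference, the standardized pattern being enforced looks roughly like the following. This is a self-contained sketch in the spirit of a category-scoped project logger; the `get_logger(name, category)` signature here is an assumption for illustration, not necessarily the actual `llama_stack.log` API.

```python
import logging

def get_logger(name: str, category: str = "uncategorized") -> logging.LoggerAdapter:
    """Sketch of a project-wide logger helper: wrap the named stdlib logger
    in an adapter that tags every record with a logging category."""
    logger = logging.getLogger(name)
    return logging.LoggerAdapter(logger, extra={"category": category})

# Modules then obtain their logger through the single helper instead of
# calling logging.getLogger directly, which is what the hook checks for.
log = get_logger(__name__, category="providers")
print(log.extra["category"])  # providers
```

Centralizing logger creation like this is what makes the pre-commit check tractable: the hook only has to forbid direct `logging` imports, and every module picks up consistent categories and formatting for free.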