[CI/Build] Update causal-conv1d and lm-eval #22141
Conversation
Signed-off-by: DarkLight1337 <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add 🚀
Code Review

This pull request updates two key dependencies, `causal-conv1d` and `lm-eval`. The update of `causal-conv1d` to version 1.5.2 is a positive change that simplifies the build process by allowing it to be managed through `pip-compile`. However, the update to `lm-eval` pins the dependency to a specific git commit hash. While this may be necessary to resolve an immediate compatibility issue, it introduces potential maintenance and security risks. I've added comments suggesting explanatory notes in the code to clarify the reason for this pinning and to facilitate future updates to a stable release.
```diff
@@ -71,7 +71,7 @@ COPY --from=build_vllm ${COMMON_WORKDIR}/vllm /vllm-workspace
 RUN cd /vllm-workspace \
     && rm -rf vllm \
     && python3 -m pip install -e tests/vllm_test_utils \
-    && python3 -m pip install lm-eval[api]==0.4.4 \
+    && python3 -m pip install git+https://github.com/EleutherAI/lm-evaluation-harness.git@bc811365ef3917a15da6aac9b7feafb4ba0fb42f#egg=lm-eval[api] \
```
Pinning a dependency to a specific commit hash can introduce maintenance and security risks. It makes the build less reproducible if the commit history is changed (e.g., force-push) and bypasses the security vetting of official releases. If this specific commit is necessary to unblock a temporary issue, please add a comment explaining the reason for this pin and which issue/PR it relates to. This will help with future maintenance and make it clear when this can be reverted to a stable, versioned release.
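A minimal sketch of what the suggested annotation could look like in the Dockerfile — the comment wording and the issue reference are illustrative assumptions, not the author's actual text:

```dockerfile
# TEMPORARY: lm-eval is pinned to a commit rather than a release because the
# latest published version is not yet compatible with the prompt_token_ids
# deprecation (see #18800). Revert to a versioned release (lm-eval[api]==0.4.x)
# once a compatible one is published.
RUN python3 -m pip install \
    "git+https://github.com/EleutherAI/lm-evaluation-harness.git@bc811365ef3917a15da6aac9b7feafb4ba0fb42f#egg=lm-eval[api]"
```

An annotation like this makes the pin self-documenting, so a future maintainer can tell at a glance whether it is still needed.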
```diff
 mamba_ssm==2.2.5 # required for plamo2 test
 matplotlib # required for qwen-vl test
 mistral_common[image,audio] >= 1.8.2 # required for voxtral test
 num2words # required for smolvlm test
 open_clip_torch==2.32.0 # Required for nemotron_vl test
 opencv-python-headless >= 4.11.0 # required for video test
 datamodel_code_generator # required for minicpm3 test
-lm-eval[api]==0.4.8 # required for model evaluation test
+lm-eval[api] @ git+https://github.com/EleutherAI/lm-evaluation-harness.git@bc811365ef3917a15da6aac9b7feafb4ba0fb42f # required for model evaluation test
```
Pinning a dependency to a specific commit hash, as done here for `lm-eval`, can be risky for long-term maintenance and security. It makes it difficult to track updates and can lead to build instability if the commit is ever removed or the repository history is altered. It's generally better to use a versioned release. If this pin is a temporary measure to resolve a specific compatibility issue, it would be very helpful to add a comment here explaining the context, linking to the relevant issue or PR, and outlining the conditions under which this can be updated to a released version.
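A hedged sketch of the suggested comment in the requirements file — the exact wording and the linked issue are assumptions for illustration (pip supports this `name @ git+URL@commit` direct-reference syntax per PEP 508):

```text
# TEMPORARY pin to a commit: needed for compatibility with the
# prompt_token_ids removal (#18800); switch back to a released
# version (e.g. lm-eval[api]==0.4.x) once one supports it.
lm-eval[api] @ git+https://github.com/EleutherAI/lm-evaluation-harness.git@bc811365ef3917a15da6aac9b7feafb4ba0fb42f # required for model evaluation test
```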
Signed-off-by: DarkLight1337 <[email protected]>
Hmm seems
I'll just update
Prefer #22409, closing
Essential Elements of an Effective PR Description Checklist

- Update `supported_models.md` and `examples` for a new model.

Purpose

- `causal-conv1d==1.5.2` is now compatible with `uv pip compile`, so let's update it to simplify the build
- Update `lm-eval` to work with [Deprecation] Remove `prompt_token_ids` arg fallback in `LLM.generate` and `LLM.embed` #18800

Test Plan

Test Result

(Optional) Documentation Update