forked from vllm-project/vllm
[NOT FOR LANDING] 355_wip_0909_rc2 -> 0909_rc2 #654
Draft: maleksan85 wants to merge 974 commits into 0909_rc2 from 355_wip_0909_rc2
+3,894 −433
Conversation
Signed-off-by: vllmellm <[email protected]>
…stream_merge_2025_04_02
Upstream merge 2025 04 02
* Adding 2-stage MoE support separately until it is added upstream
* Missing layout param
* Enable fused fp8 out in V1 CPA and FA
* Correct operation, and creating the tensor of the correct type
* Update to use for the non-custom path as well
* This was a debug assert
Upstream merge 2025 04 07
* Added the extra use_irope parameter
* Fix ROCm V1 Engine Fused MoE Bug
* Add warning message that V0 does not support irope
Signed-off-by: tjtanaa <[email protected]>
Co-authored-by: tjtanaa <[email protected]>
Co-authored-by: Hongxia Yang <[email protected]>
add RAY_EXPERIMENTAL_NOSET_ROCR_VISIBLE_DEVICES=1
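The variable above is Ray's experimental toggle that stops Ray workers from overwriting `ROCR_VISIBLE_DEVICES`, so every worker keeps visibility of all ROCm GPUs. A minimal sketch of how it would be set before launching a multi-GPU vLLM server (the serve invocation and model name are illustrative, not taken from this PR):

```shell
# Prevent Ray from rewriting ROCR_VISIBLE_DEVICES on its workers,
# leaving GPU visibility under the launcher's control.
export RAY_EXPERIMENTAL_NOSET_ROCR_VISIBLE_DEVICES=1

# Example invocation (hypothetical model/flags; adjust to your setup):
# vllm serve meta-llama/Llama-3.1-8B-Instruct --tensor-parallel-size 8
```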
Signed-off-by: maleksan85 <[email protected]> Co-authored-by: maleksan85 <[email protected]>
Signed-off-by: charlifu <[email protected]>
Signed-off-by: jpvillam <[email protected]>
Signed-off-by: jpvillam <[email protected]>
* Updated README.md with April 10 results
* Updated README.md with "2-stage MoE and MLA from AITER"
Added correct path for Dockerfile.rocm under Docker manifest
Signed-off-by: charlifu <[email protected]>
The upstream has moved docker files into a separate directory. Signed-off-by: Alexei V. Ivanov <[email protected]>
* Update test-template.j2 to fix new location of run-amd-test.sh
* Update test-template.j2: upsie, wrong path initially
* Update test-template.j2
Signed-off-by: Alexei V. Ivanov <[email protected]>
* Adapted hipblaslt build to work with ROCm 6.4
* rccl version compatible with 6.4
* Torch and triton combination that works
* hipblaslt version and not rebuilding rccl
* Fixing another package that we install now
Added Known Issues section to document the Meta 405B FP8 model memory fault and its workaround.
Signed-off-by: seungrokjung <[email protected]>
* Updated README.md for June 10 release
* Added Docker Manifest git hash
* Updated README.md for June 24 Docker release
* Added additional throughput results
* Fixed some throughput results
* Minor changes to command line examples
* README changes and added throughput results (still waiting on latency)
* Added latency results
* Update README.md
* Update README.md
* Update test-pipeline.yaml: Disabling the "Tensorizer Test". The test is seen to generate exceptions while still reporting as successful. That needs to be verified before re-enabling the test in the production environment.
* Fixing pre-commit complaints.
Signed-off-by: Alexei V. Ivanov <[email protected]>
maleksan85
commented
Sep 5, 2025
@@ -414,6 +429,7 @@ def __init__(
     attn_type: AttentionType = AttentionType.DECODER,
     kv_sharing_target_layer_name: Optional[int] = None,
     sinks: Optional[torch.Tensor] = None,
+    sinks: Optional[torch.Tensor] = None,
fix it!
Signed-off-by: Matthew Wong <[email protected]>
support ck-tile blockquant gemm in vllm
…ng on AMD (vllm-project#23864) Signed-off-by: Jinghui Zhang <[email protected]>
Please direct your PRs to the upstream vllm (https://github.com/vllm-project/vllm.git)
Accepting PRs into the ROCm fork (https://github.com/ROCm/vllm) will require a clear, previously communicated exception.