Pull requests: HabanaAI/vllm-hpu-extension
#387 Add dynamic_quant_for_gaudi2.py script to convert model
Opened Oct 29, 2025 by wenbinc-Bin
#383 [SW-238300] Disabling dynamic quantization in mlp module
Opened Oct 26, 2025 by HolyFalafel
#369 Pass chunk_size and global_num_experts to the MoE kernel
Opened Sep 19, 2025 by yangulei
#246 Allow usage of fused_block_softmax_adjustment for Qwen with Lazy (Draft)
Opened Jun 27, 2025 by mswiniarsk