
[worker] feat: QAT with FP8 (w8a8 & w8a16) #6229

Draft
HollowMan6 wants to merge 2 commits into verl-project:main from HollowMan6:fp8_qat

Conversation

@HollowMan6 (Collaborator)

What does this PR do?

Support w8a8 (FP8 weights and activations) and w8a16 (FP8 weights, 16-bit activations) for QAT; see the usage sketch in the API and Usage Example section below.

Requires NVIDIA-NeMo/Megatron-Bridge#3612

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, veomni, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data, cfg, reward, fully_async, one_step_off
    • If this PR involves multiple modules, separate them with , like [megatron, fsdp, doc]
    • {type} is one of feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate with experiment(s) and show results such as training-curve plots, evaluation results, etc.

API and Usage Example

Demonstrate how the API changes, if any, and provide usage example(s) if possible.

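A hedged usage sketch, in place of the template placeholder. The import path and the quant_mode keyword are assumptions (this PR adds w8a8/w8a16 modes to a QATLinear module, but the exact constructor API is not shown here); consult the diff for the real interface.

import torch
from verl.utils.modelopt import QATLinear  # assumed import path

# "w8a8" fake-quantizes weights and activations to FP8 during training;
# "w8a16" fake-quantizes weights only. The kwarg name is an assumption.
layer = QATLinear(4096, 4096, bias=False, quant_mode="w8a8")
x = torch.randn(2, 4096)
y = layer(x)  # forward sees FP8 quantization error; master weights stay high precision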

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all the following items before requesting a review; otherwise, the reviewer might deprioritize this PR.

Signed-off-by: Hollow Man <hollowman@opensuse.org>
@gemini-code-assist (Bot, Contributor) left a comment


Code Review

This pull request introduces FP8 Quantization-Aware Training (QAT) support for FSDP and Megatron backends, complementing the existing NVFP4 functionality. Key updates include the implementation of FP8 blockwise fake quantization kernels, the addition of w8a8 and w8a16 modes to the QATLinear module, and logic for exporting serialized FP8 weights and scales for vLLM rollout. The changes also incorporate configuration for 2D weight block sizes, ensure LoRA adapter weights remain unquantized, and provide compatibility updates for vLLM 0.19. Feedback highlights a redundant copy argument in a tensor conversion and a fragile string-based module path check in the Megatron patching logic.
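For background on the kernel described above, here is a minimal sketch of blockwise FP8 fake quantization. The e4m3 format, the (128, 128) block grid, and per-block absmax scaling are common choices assumed for illustration, not necessarily what this PR's kernel implements; the returned FP8 payload and per-block scales are the kind of pair the export path would serialize for vLLM rollout.

import torch

def fp8_blockwise_fake_quant(w: torch.Tensor, block=(128, 128)):
    """Fake-quantize each block of a 2D weight to float8_e4m3fn with its
    own absmax scale, then dequantize back to w.dtype so the QAT forward
    sees quantization error while master weights stay high precision."""
    fp8_max = torch.finfo(torch.float8_e4m3fn).max  # 448.0 for e4m3fn
    rows, cols = w.shape
    n_r, n_c = -(-rows // block[0]), -(-cols // block[1])  # ceil division
    q = torch.empty(rows, cols, dtype=torch.float8_e4m3fn)
    scales = torch.empty(n_r, n_c, dtype=torch.float32)
    out = torch.empty_like(w)
    for i in range(n_r):
        for j in range(n_c):
            rs, cs = i * block[0], j * block[1]
            blk = w[rs:rs + block[0], cs:cs + block[1]].float()
            s = blk.abs().amax().clamp(min=1e-12) / fp8_max  # per-block scale
            q_blk = (blk / s).to(torch.float8_e4m3fn)        # quantize
            q[rs:rs + block[0], cs:cs + block[1]] = q_blk
            out[rs:rs + block[0], cs:cs + block[1]] = (q_blk.float() * s).to(w.dtype)  # dequantize
            scales[i, j] = s
    return out, q, scales  # fake-quantized weight, FP8 payload, per-block scales

In training, the dequantized output is what the linear layer consumes, and a straight-through estimator lets gradients flow to the high-precision master weights; at export time, only the FP8 payload and scales need to be serialized.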

Review comment threads (outdated):
  • verl/utils/kernel/fp8_kernel.py
  • verl/utils/modelopt/megatron_qat_patch.py
Signed-off-by: Hollow Man <hollowman@opensuse.org>
