
Voxtral Realtime: enable bf16 for Metal backend with quantization #17845

Draft
mergennachin wants to merge 1 commit into main from bf16_voxtral_metal

Conversation

@mergennachin
Contributor

The Metal AOTI backend already handles bf16 correctly (fp32 attention
masks, fp32 RoPE upcast, dtype-agnostic KV caches and SDPA). Enable
--dtype bf16 as the default recipe for Metal CI and update all
documentation to recommend bf16 with fpa4w quantization.

mergennachin requested a review from lucylq as a code owner March 4, 2026 14:28
Copilot AI review requested due to automatic review settings March 4, 2026 14:29
@pytorch-bot

pytorch-bot bot commented Mar 4, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/17845

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures, 1 Cancelled Job, 4 Unrelated Failures

As of commit 52027ff with merge base 6db7f4c:

NEW FAILURES - The following jobs have failed:

CANCELLED JOB - The following job was cancelled. Please retry:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

BROKEN TRUNK - The following jobs failed but were already failing on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

meta-cla bot added the CLA Signed label (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) Mar 4, 2026
@github-actions

github-actions bot commented Mar 4, 2026

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Copilot AI left a comment

Pull request overview

Enables and recommends bf16 for Voxtral Realtime exports on Metal when using quantization, updating CI export arguments and user-facing docs to reflect the preferred configuration for memory/throughput.

Changes:

  • Update Voxtral Realtime docs to include bf16 memory footprint numbers and recommend --dtype bf16 for Metal quantized exports.
  • Adjust example Metal export command(s) to include --dtype bf16 alongside fpa4w.
  • Update Metal CI export script to pass --dtype bf16 for the quantized-int4-metal configuration.
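The memory-footprint claim behind these changes follows from simple per-parameter arithmetic. A back-of-the-envelope sketch (the parameter count below is hypothetical, not Voxtral's actual size; this counts weights only, ignoring activations and the KV cache, and assumes the 4-bit weight quantization stores roughly 4 bits per weight):

```python
GIB = 1024 ** 3

def weight_footprint_gib(num_params: int, bits_per_weight: int) -> float:
    """Weight-only memory footprint in GiB, ignoring activations and KV cache."""
    return num_params * bits_per_weight / 8 / GIB

# Hypothetical 5B-parameter model, for illustration only:
params = 5_000_000_000
fp32_gib = weight_footprint_gib(params, 32)   # full precision
bf16_gib = weight_footprint_gib(params, 16)   # half of fp32
int4_gib = weight_footprint_gib(params, 4)    # a quarter of bf16
```

The ratios are what matter: bf16 halves the fp32 footprint, and 4-bit weight quantization cuts it by a further factor of four.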

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 3 comments.

Files reviewed:

  • examples/models/voxtral_realtime/model.md: Updates memory calculations and guidance around bf16 + quantization for Metal/CUDA.
  • examples/models/voxtral_realtime/export_voxtral_rt.py: Updates the usage example to show a Metal export with bf16 + fpa4w.
  • examples/models/voxtral_realtime/README.md: Updates the Metal backend table and export examples to recommend bf16 with fpa4w.
  • .ci/scripts/export_model_artifact.sh: Ensures the Metal int4 quantized CI export passes --dtype bf16.


mergennachin marked this pull request as draft March 4, 2026 14:37

Fix a Metal shader compilation bug in the streaming encoder where
bool.to(bf16) generates `bfloat tmp = 0.0;` — Metal Shading Language
doesn't support implicit float-to-bfloat literal conversion. Use
.float() instead and let mul_ handle type promotion.
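The workaround in the commit message can be sketched as follows. The helper name and shapes are hypothetical; the point is the dtype path: bool → float32 (portable) rather than bool → bfloat16, which per the commit can make the Metal codegen emit `bfloat tmp = 0.0;`, an implicit float-to-bfloat literal conversion that Metal Shading Language rejects.

```python
import torch

def masked_scale(x: torch.Tensor, keep: torch.Tensor) -> torch.Tensor:
    """Zero out elements of x where keep is False.

    Hypothetical sketch of the fix: upcast the bool mask to fp32 and let
    the in-place multiply handle promotion back to x's dtype (e.g. bf16),
    instead of converting the mask directly to bf16.
    """
    out = x.clone()
    out.mul_(keep.float())   # fp32 mask; result is cast back to out's dtype
    return out
```

The in-place `mul_` accepts the fp32 operand because float-to-float downcasts are allowed under PyTorch's in-place casting rules, so the output stays bf16 without ever materializing a bool-to-bf16 conversion in the graph.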
mergennachin temporarily deployed to upload-benchmark-results March 4, 2026 15:40 with GitHub Actions
