Add TurboQuant KV cache compression for prefix cache (4.6x) #233
arozanov wants to merge 4 commits into waybarrios:main
Conversation
Code Review
Clean, well-structured code that follows existing patterns (_quantize_cache / _dequantize_cache). The needs_dequantize property is an elegant abstraction. A few concerns:
🔴 Breaking change: is_trimmable() (blocking)
The has_non_trimmable check changed from duck-typing (hasattr(lc, "offset") and hasattr(lc, "keys")) to hasattr(lc, "is_trimmable") and lc.is_trimmable(). Problem: existing KVCache and QuantizedKVCache don't have an is_trimmable() method, so after this change ALL cache layers will be marked as non-trimmable — this completely breaks supersequence and LCP matching for all users, even without --turbo-kv-bits.
Suggestion — keep the old duck-typing as fallback:
has_non_trimmable = any(
    not (
        (hasattr(lc, "is_trimmable") and lc.is_trimmable())
        or (hasattr(lc, "offset") and hasattr(lc, "keys"))
    )
    for lc in cache
)

🟡 Private API access in dequantization
_dequantize_cache() accesses many private attributes of TurboQuantKVCache (_k_q, _v_q, _k_dim, _v_dim, _dtype, _full_dequant(), _ensure_quantizer()). This is fragile — private API can change without notice. Does mlx-lm PR #1067 expose a public .dequantize() or .to_kvcache() method? If not, it would be worth proposing one upstream.
🟡 Shallow copy risk in _trim_cache_offset
tc.__dict__.update(layer_cache.__dict__) shares references to all internal objects. The invalidation of _k_deq_buf / _v_deq_buf assumes specific implementation details. If TurboQuantKVCache adds more cache buffers later, they'll be stale. Does the upstream class provide a copy() or trim() method?
🟡 estimate_kv_cache_memory — .state property
Accessing .state may trigger lazy evaluation if it returns dequantized tensors. Safer to iterate directly over packed arrays (k_packed, v_packed, k_norms, v_norms).
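A minimal sketch of the safer estimation path described above: sum the byte sizes of the packed buffers directly instead of touching `.state`. The attribute names (`k_packed`, `v_packed`, `k_norms`, `v_norms`) are taken from this review's description of `TurboQuantKVCache` and should be treated as assumptions; the stand-in class below is hypothetical.

```python
import numpy as np

class FakeTurboLayer:
    """Hypothetical stand-in with the packed buffers the review mentions."""
    def __init__(self):
        self.k_packed = np.zeros((1, 8, 128, 48), dtype=np.uint8)
        self.v_packed = np.zeros((1, 8, 128, 48), dtype=np.uint8)
        self.k_norms = np.zeros((1, 8, 128), dtype=np.float16)
        self.v_norms = np.zeros((1, 8, 128), dtype=np.float16)

def estimate_layer_bytes(layer):
    # Sum packed array sizes directly; no .state access, so no risk of
    # triggering a lazy dequantization of the whole cache entry.
    total = 0
    for name in ("k_packed", "v_packed", "k_norms", "v_norms"):
        arr = getattr(layer, name, None)
        if arr is not None:
            total += arr.nbytes
    return total

print(estimate_layer_bytes(FakeTurboLayer()))  # → 102400
```

The same `getattr(..., None)` probing keeps the estimator working if a future buffer is absent on some entries.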
ℹ️ Upstream dependency
PR depends on ml-explore/mlx-lm#1067 which is not merged yet. Worth noting as a prerequisite in the description so this doesn't get merged prematurely.
Thanks for the thorough review! Fixed:
For the private API and shallow copy concerns: added public dequantize() and copy() methods to TurboQuantKVCache upstream in mlx-lm #1067. Will update this PR to use them once that's merged.

Thanks for the quick fixes! Happy to hear about the public dequantize() / copy() methods landing upstream.
@waybarrios, @arozanov: brief positive note. Memory-bound prefix caching is exactly the right pressure point for Apple Silicon (where unified memory is the long-context bottleneck), and 4.6x compression on prefix entries is meaningful relative to the ~2x from the existing 8-bit quantization. Two questions for completeness, not blocking:

Mergeable on current main.
waybarrios
left a comment
Code review
Found 4 issues:
1. PR needs rebase — _trim_cache_offset and _dequantize_cache were rewritten on main
After this PR branched, the _QuantizedCacheWrapper refactor landed on main (commit 6f0efc2), which completely rewrote both _trim_cache_offset and _dequantize_cache. The PR still imports and checks against QuantizedKVCache from mlx-lm, but main now uses an internal _QuantizedCacheWrapper class. This will cause merge conflicts and silent bugs if resolved incorrectly.
What the PR expects:
from mlx_lm.models.cache import QuantizedKVCache
# ...
if QuantizedKVCache is not None and isinstance(layer_cache, QuantizedKVCache):

What main now has:

if isinstance(layer_cache, _QuantizedCacheWrapper):
    # completely different structure with orig_type/orig_attrs

CI also confirms black --check fails on memory_cache.py. A full rebase against current main is needed.
vllm-mlx/vllm_mlx/memory_cache.py, lines 271 to 320 in 9e909a8
2. Shallow copy in _trim_cache_offset shares mutable quantizer state with stored cache
The TurboQuantKVCache branch uses __dict__.update which shallow-copies all references. The mutable _k_q/_v_q quantizer objects end up shared between the trimmed copy and the original stored entry:
# This creates shared references to _k_q, _v_q (mutable quantizer objects)
tc = TurboQuantKVCache.__new__(TurboQuantKVCache)
tc.__dict__.update(layer_cache.__dict__) # shallow copy
tc.offset = max(layer_cache.offset - trim_by, 0)
tc._k_deq_buf = None # only buffers are reset
tc._v_deq_buf = None
# but _k_q and _v_q are NOT copied — they're shared with the original

Later, _dequantize_cache calls layer._ensure_quantizer(...) which mutates quantizer state in-place. Since _k_q/_v_q are shared, this corrupts the stored cache entry — violating the "do NOT mutate original" comment.
Fix: either deep-copy the quantizer objects, or use the upstream copy() method once mlx-lm#1067 lands:
# Option A: deep copy quantizers
import copy
tc._k_q = copy.deepcopy(layer_cache._k_q) if layer_cache._k_q is not None else None
tc._v_q = copy.deepcopy(layer_cache._v_q) if layer_cache._v_q is not None else None
# Option B (preferred): use upstream public API
tc = layer_cache.copy()
tc.offset = max(layer_cache.offset - trim_by, 0)

vllm-mlx/vllm_mlx/memory_cache.py, lines 309 to 320 in 9e909a8
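The hazard above can be shown with plain Python objects: `__dict__.update` copies references, so any mutable member ends up shared between the original and the "copy". The `Quantizer` and `Cache` classes below are stand-ins, not the real TurboQuant types.

```python
import copy

class Quantizer:
    """Stand-in for the mutable _k_q/_v_q quantizer objects."""
    def __init__(self):
        self.calibrated = False

class Cache:
    def __init__(self):
        self._k_q = Quantizer()
        self.offset = 100

original = Cache()

# Shallow copy, as in the PR's _trim_cache_offset:
trimmed = Cache.__new__(Cache)
trimmed.__dict__.update(original.__dict__)

trimmed._k_q.calibrated = True       # mutates the SHARED quantizer
print(original._k_q.calibrated)      # → True: original was corrupted

# Deep-copying the quantizer breaks the sharing:
trimmed._k_q = copy.deepcopy(original._k_q)
trimmed._k_q.calibrated = False
print(original._k_q.calibrated)      # → True: original now untouched
```

This is exactly why resetting only `_k_deq_buf`/`_v_deq_buf` is not enough: the buffers are replaced, but the quantizers stay shared.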
3. _dequantize_cache accesses 10+ private attributes — should use public API
The dequantization path reaches deep into TurboQuantKVCache internals (_k_q, _v_q, _k_dim, _v_dim, _dtype, _full_dequant(), _ensure_quantizer(), etc.). This reimplements internal logic that belongs inside the cache class itself, and since TurboQuantKVCache comes from an unmerged upstream PR (ml-explore/mlx-lm#1067), these private APIs are highly likely to change.
Current approach:
# 10+ private attribute accesses
if layer._k_q is None:
layer._ensure_quantizer(layer._k_dim, layer._v_dim)
B, H = layer.k_packed.shape[:2]
dtype = layer._dtype if layer._dtype is not None else mx.float16
k_all = layer._full_dequant(
layer.k_packed, layer.k_norms, layer._k_q,
layer._k_dim, B, H, layer.offset, dtype,
)

The upstream PR already exposes dequantize() and copy() public methods. This should be:
elif TurboQuantKVCache is not None and isinstance(layer, TurboQuantKVCache) and not layer.empty():
    result.append(layer.dequantize())

vllm-mlx/vllm_mlx/memory_cache.py, lines 445 to 462 in 9e909a8
4. RotatingKVCache metadata lost on dequantize — regression for sliding-window models
_turbo_quantize_cache only handles plain KVCache. On dequantize, it always reconstructs a plain KVCache:
# _turbo_quantize_cache — only matches plain KVCache
if isinstance(layer, KVCache) and layer.keys is not None:
compressed.append(layer.to_turbo_quantized(bits=bits))
# _dequantize_cache — always creates plain KVCache, losing RotatingKVCache metadata
kv = KVCache() # step, max_size, _idx are gone
kv.update_and_fetch(k_all, v_all)

This re-introduces the bug fixed by the _QuantizedCacheWrapper refactor, which preserves orig_type/orig_attrs to correctly reconstruct RotatingKVCache for sliding-window models (Gemma 3, etc.). The TurboQuant path needs the same preservation pattern:
# Should preserve the original cache type, similar to _QuantizedCacheWrapper
orig_type = type(layer) # could be RotatingKVCache
orig_attrs = {k: getattr(layer, k) for k in ("step", "max_size", "_idx") if hasattr(layer, k)}
# ... then reconstruct with orig_type and orig_attrs on dequantize

vllm-mlx/vllm_mlx/memory_cache.py, lines 406 to 420 in 9e909a8
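The capture/reconstruct round trip sketched above can be made concrete. This is a hedged sketch under stated assumptions: the class names are stand-ins for mlx-lm's KVCache/RotatingKVCache, and the attribute list ("step", "max_size", "_idx") is the one quoted in this review.

```python
class KVCache:
    """Stand-in for mlx-lm's plain KVCache."""
    def __init__(self):
        self.keys = None

class RotatingKVCache(KVCache):
    """Stand-in for the sliding-window cache with extra metadata."""
    def __init__(self, max_size=512, step=256):
        super().__init__()
        self.max_size = max_size
        self.step = step
        self._idx = 0

def capture(layer):
    # Record the concrete type plus sliding-window metadata before quantizing.
    orig_type = type(layer)
    orig_attrs = {k: getattr(layer, k)
                  for k in ("step", "max_size", "_idx") if hasattr(layer, k)}
    return orig_type, orig_attrs

def reconstruct(orig_type, orig_attrs):
    # Rebuild the same cache type on dequantize and restore its metadata.
    kv = orig_type.__new__(orig_type)
    KVCache.__init__(kv)
    for k, v in orig_attrs.items():
        setattr(kv, k, v)
    return kv

t, attrs = capture(RotatingKVCache(max_size=1024))
restored = reconstruct(t, attrs)
print(type(restored).__name__, restored.max_size)  # → RotatingKVCache 1024
```

A plain `KVCache` would simply produce an empty `orig_attrs` dict, so the same path covers both cache types.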
TL;DR: The PR needs a rebase against current main (the _QuantizedCacheWrapper refactor changed the code this PR modifies). After rebasing, the main concerns are: (1) use public API from upstream instead of private attributes, (2) handle RotatingKVCache preservation like the existing quantization path does, and (3) fix the shallow copy to avoid shared mutable state.
Adds --turbo-kv-bits flag (1-4) to compress stored prefix cache entries using TurboQuant (arXiv 2504.19874). 3-bit gives 4.6x compression vs FP16, compared to ~2x from the existing 8-bit quantization.

Integration points:
- memory_cache.py: _turbo_quantize_cache / _dequantize_cache, memory estimation, trim support, needs_dequantize property, config validation
- scheduler.py: turbo_kv_bits in SchedulerConfig, propagation to MemoryCacheConfig
- cli.py: --turbo-kv-bits for serve and bench commands

Requires mlx-lm with TurboQuant support (ml-explore/mlx-lm#1067).
I rebased this PR onto current main. The follow-up changes address the review points directly:
Validation I ran after the rebase:
I also updated the stale quantization assertions.
Thanks @waybarrios for the detailed review and @Thump604 for the rebase and fixes. No longer a work in progress; ready for final review. All four issues from the review are addressed in the latest push. Waiting on ml-explore/mlx-lm#1067 upstream before this can land.
Thump604
left a comment
The rebase fixed the earlier structural issues, but I still see one merge blocker around the upstream dependency boundary.
Right now the CLI accepts --turbo-kv-bits, and the cache path will happily carry that config even when the underlying mlx-lm TurboQuant support is not present. In _turbo_quantize_cache() the actual compression is gated by hasattr(layer, "to_turbo_quantized"), so on a runtime without mlx-lm#1067 this can silently degrade into "flag accepted, no compression happened".
That is a bad failure mode for a user-facing memory/compression flag. I think the PR needs one of these before merge:
- fail fast at startup / config validation when --turbo-kv-bits is set but TurboQuant support is unavailable, or
- gate the CLI flag itself behind detected TurboQuant capability.
Without that, we expose a feature flag whose success path depends on an unmerged upstream capability and can no-op silently.
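The fail-fast option could look something like the sketch below. It is a hedged illustration, not the PR's actual code: the capability probe mirrors the `hasattr(layer, "to_turbo_quantized")` gate quoted above (checking the class rather than an instance), and the import path follows the `mlx_lm.models.cache` import shown earlier in this thread.

```python
def validate_turbo_kv_bits(turbo_kv_bits):
    """Raise at config time instead of silently skipping compression."""
    if turbo_kv_bits is None:
        return  # flag not set, nothing to validate
    if not 1 <= turbo_kv_bits <= 4:
        raise ValueError(f"--turbo-kv-bits must be 1-4, got {turbo_kv_bits}")
    try:
        from mlx_lm.models.cache import KVCache
        # Probe for the TurboQuant conversion hook on the class itself;
        # assumes the method name used by the runtime gate.
        supported = hasattr(KVCache, "to_turbo_quantized")
    except ImportError:
        supported = False
    if not supported:
        raise RuntimeError(
            "--turbo-kv-bits requires mlx-lm with TurboQuant support "
            "(ml-explore/mlx-lm#1067); the installed mlx-lm lacks it"
        )

validate_turbo_kv_bits(None)  # no flag set: passes silently
```

Running this during config validation turns the silent no-op into a clear startup error, matching the first of the two options above.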
Summary
Adds --turbo-kv-bits option (1-4) to compress prefix cache entries using TurboQuant (PolarQuant: randomized Hadamard rotation + Lloyd-Max codebook quantization). At 3-bit, this gives 4.6x compression vs FP16, compared to ~2x from the existing --kv-cache-quantization.

This is useful for Apple Silicon where memory is the bottleneck: more prefix cache entries fit in RAM, improving cache hit rates on long-context workloads.
Usage
Replaces --kv-cache-quantization when set. Falls back to standard quantization if TurboQuant is not available.

Changes
- memory_cache.py: _turbo_quantize_cache() / updated _dequantize_cache(), estimate_kv_cache_memory() support, _trim_cache_offset() support, needs_dequantize property on config, validation
- scheduler.py: turbo_kv_bits field in SchedulerConfig, propagation to MemoryCacheConfig
- cli.py: --turbo-kv-bits argument for serve and bench commands

Dependency
Requires mlx-lm with TurboQuant KV cache support: ml-explore/mlx-lm#1067
Test plan
- from_state deserialization → dequantize (auto-init quantizer)
- estimate_kv_cache_memory returns correct bytes for TurboQuant entries
- _trim_cache_offset creates shallow copy (does not mutate stored entry)
- has_non_trimmable correctly identifies TurboQuant as trimmable
- needs_dequantize property gates all fetch paths
- --turbo-kv-bits propagates to SchedulerConfig → MemoryCacheConfig