Pull requests: Blaizzy/mlx-vlm

- Add KV cache (quantization) parameters to server. (#776, opened Feb 26, 2026 by viktike)
- Update LORA.md for the new trainer (#775, opened Feb 26, 2026 by Goekdeniz-Guelmez)
- feat(minicpmo): add minicpmo backend support (#774, opened Feb 26, 2026 by VansonLeung)
- [WIP] Tool calling in server (#773, opened Feb 25, 2026 by viktike)
- CORS middleware (#766, opened Feb 25, 2026 by viktike)
- Streaming chatml response enhancements. (#764, opened Feb 25, 2026 by viktike)
- Add distributed infer for qwen3_vl_moe (#730, opened Feb 13, 2026 by Blaizzy)
- Distributed inference for Kimi K2.5 (#689, opened Jan 27, 2026 by pcuenca)
- Implement Joycaption as a custom Llava model (#659, opened Jan 16, 2026 by nArn0)
- Remove mask parameter (#595, opened Nov 20, 2025 by Blaizzy, draft)
- [WIP] Reduce deps core (#593, opened Nov 19, 2025 by Blaizzy, draft)
- [WIP] Token filtering + merging (#185, opened Jan 20, 2025 by Blaizzy)