forked from ggml-org/llama.cpp
Pull requests: tetherto/qvac-ext-lib-llama.cpp
#11: WIP: llama: Vulkan: Fix Adreno Q8_0 issues.
Opened Aug 29, 2025 by infinitalo
Labels: examples, ggml, Nvidia GPU, testing, Vulkan

#5: Add initial LoRA finetuning support; vulkan OUT_PROD; vulkan cross-entropy-backward
Opened Aug 19, 2025 by makaveli10
Labels: examples, ggml, Nvidia GPU, Vulkan