Releases: AmpereComputingAI/llama.cpp
v3.3.1
v3.3.0
v3.2.1
v3.2.0
v3.1.2
v3.1.0
v2.2.1
Update benchmark.py
v2.0.0
- Upgraded upstream tag enables Llama 3.1 in ollama
- Support for AmpereOne platform
- Breaking change: weight type IDs have changed, so existing models must be re-quantized to the Q8R16 and Q4_K_4 formats with the current llama-quantize tool.
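The re-quantization step above might look like the following sketch. The model paths are placeholders, and it is assumed that this fork's llama-quantize accepts `Q8R16` and `Q4_K_4` as type names on the command line:

```shell
# Re-quantize an existing f16 GGUF model into the Ampere-optimized formats.
# Paths are placeholders; point them at your own model files.
./llama-quantize models/model-f16.gguf models/model-q8r16.gguf Q8R16
./llama-quantize models/model-f16.gguf models/model-q4_k_4.gguf Q4_K_4
```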
v1.2.6
Create README.md
v1.2.3
- The rebase allows llama-cpp-python to pick up the upstream CVE fix (GHSA-56xg-wfcc-g829)
- Experimental support for the Q8R16 quantized format with optimized matrix multiplication kernels
- CMake files updated to build llama.aio on AmpereOne