v0.6.0 - follow major llama.cpp changes
What's Changed
- Better Antiprompt Testing by @martindevans in #150
- Simplified `LLamaInteractExecutor` antiprompt matching by @martindevans in #152
- Changed `OpenOrCreate` to `Create` by @martindevans in #153
- Beam Search by @martindevans in #155
- ILogger implementation by @saddam213 in #158
- Removed `GenerateResult` by @martindevans in #159
- `GetState()` fix by @martindevans in #160
- llama_get_kv_cache_token_count by @martindevans in #164
- better_instruct_antiprompt_checking by @martindevans in #165
- skip_empty_tokenization by @martindevans in #167
- SemanticKernel API Update by @drasticactions in #169
- Removed unused properties of `InferenceParams` & `ModelParams` by @martindevans in #149
- Coding assistant example by @Regenhardt in #172
- Remove non-async by @martindevans in #173
- MacOS default build is now Metal (llama.cpp #2901) by @SignalRT in #163
- CPU Feature Detection by @martindevans in #65
- make `InferenceParams` a record so we can use `with` by @redthing1 in #175 (see the sketch after this list)
- fix opaque `GetState` (fixes #176) by @redthing1 in #177
- Extensions Method Unit Tests by @martindevans in #179
- Async Stateless Executor by @martindevans in #182
- Fixed GitHub Action by @martindevans in #190
- GrammarRule Tests by @martindevans in #192
- More Tests by @martindevans in #194
- Support SemanticKernel 1.0.0-beta1 by @DVaughan in #193
- Major llama.cpp API Change by @martindevans in #185
- Cleanup by @martindevans in #196
- Update WebUI inline with v5.0.x by @saddam213 in #197
- More Logging by @martindevans in #198
- chore: Update LLama.Examples and LLama.SemanticKernel by @xbotter in #201
- ci: add auto release workflow. by @AsakusaRinne in #204
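The `with` change in #175 refers to C#'s non-destructive mutation for record types. The sketch below shows the idea only; `InferenceParamsSketch` and its `MaxTokens`/`Temperature` properties are illustrative assumptions, not the actual LLamaSharp `InferenceParams` definition.

```csharp
using System;

// Illustrative stand-in for a record-based parameters type.
// NOTE: property names here are assumptions for the example.
public record InferenceParamsSketch
{
    public int MaxTokens { get; init; } = 256;
    public float Temperature { get; init; } = 0.8f;
}

public static class WithExample
{
    public static void Main()
    {
        var defaults = new InferenceParamsSketch();

        // `with` copies the original instance and overrides only the
        // named properties; `defaults` itself is left untouched.
        var creative = defaults with { Temperature = 1.2f };

        Console.WriteLine($"{defaults.Temperature} -> {creative.Temperature}");
    }
}
```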
New Contributors
- @Regenhardt made their first contribution in #172
- @redthing1 made their first contribution in #175
- @DVaughan made their first contribution in #193
Full Changelog: v0.5.1...v0.6.0