33 commits

All commits are by stainless-app[bot].

06e03be feat(api): query_metrics, batches, changes (Aug 22, 2025)
8243d43 feat(api): some updates to query metrics (Aug 22, 2025)
3e9c39f feat(api): fix completion response breakage perhaps? (Aug 22, 2025)
1f42156 codegen metadata (Aug 25, 2025)
9de5708 feat(api): manual updates (Aug 26, 2025)
66adbea fix: close body before retrying (Aug 29, 2025)
a3cccf1 chore(internal): codegen related update (Sep 3, 2025)
0b95836 fix(client): fix circular dependencies and offset pagination (Sep 5, 2025)
d76c69c fix(internal): unmarshal correctly when there are multiple discriminators (Sep 6, 2025)
7b5d227 chore: bump minimum go version to 1.22 (Sep 20, 2025)
67c0b00 chore: update more docs for 1.22 (Sep 20, 2025)
15dfa47 fix: use slices.Concat instead of sometimes modifying r.Options (Sep 20, 2025)
062a46b chore: do not install brew dependencies in ./scripts/bootstrap by default (Sep 20, 2025)
eb137af feat(api): manual updates (Sep 25, 2025)
ceb15f3 fix: bugfix for setting JSON keys with special characters (Sep 26, 2025)
b8635d7 feat(api): removing openai/v1 (Sep 27, 2025)
222bb4e feat(api): expires_after changes for /files (Sep 30, 2025)
9f926b2 feat(api)!: fixes to remove deprecated inference resources (Sep 30, 2025)
a3d6051 feat(api): updating post /v1/files to have correct multipart/form-data (Sep 30, 2025)
245c643 docs: update examples (Sep 30, 2025)
55b38d5 codegen metadata (Sep 30, 2025)
2060878 feat(api): SDKs for vector store file batches (Sep 30, 2025)
e5f679f feat(api): SDKs for vector store file batches apis (Sep 30, 2025)
f12fecf feat(api): moving { rerank, agents } to `client.alpha.` (Sep 30, 2025)
d8b42f6 fix: fix stream event model reference (Sep 30, 2025)
42bdca7 feat(api): move post_training and eval under alpha namespace (Sep 30, 2025)
c9da417 feat(api): fix file batches SDK to list_files (Sep 30, 2025)
a16eaef feat(api)!: use input_schema instead of parameters for tools (Oct 1, 2025)
837277d feat(api): tool api (input_schema, etc.) changes (Oct 2, 2025)
65cef22 fix(api): fix the ToolDefParam updates (Oct 2, 2025)
6c9752f feat(api): fixes to URLs (Oct 2, 2025)
f3a9ee7 fix(api): another fix to capture correct responses.create() params (Oct 2, 2025)
16f4c17 release: 0.1.0-alpha.2 (Oct 2, 2025)
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
-  ".": "0.1.0-alpha.1"
+  ".": "0.1.0-alpha.2"
}
8 changes: 4 additions & 4 deletions .stats.yml
@@ -1,4 +1,4 @@
-configured_endpoints: 106
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/llamastack%2Fllama-stack-client-4f6633567c1a079df49d0cf58f37251a4bb0ee2f2a496ac83c9fee26eb325f9c.yml
-openapi_spec_hash: af5b3d3bbecf48f15c90b982ccac852e
-config_hash: ddcbd66d7ac80290da208232a746e30f
+configured_endpoints: 108
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/llamastack%2Fllama-stack-client-b220f9f8667d2af8007134d0403b24452c20c9c512ca87d0b69b20b761272609.yml
+openapi_spec_hash: cde1096a830f2081d68f858f020fd53f
+config_hash: 8800bdff1a087b9d5211dda2a7b9f66f
54 changes: 54 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,59 @@
# Changelog

## 0.1.0-alpha.2 (2025-10-02)

Full Changelog: [v0.1.0-alpha.1...v0.1.0-alpha.2](https://github.com/llamastack/llama-stack-client-go/compare/v0.1.0-alpha.1...v0.1.0-alpha.2)

### ⚠ BREAKING CHANGES

* **api:** use input_schema instead of parameters for tools
* **api:** fixes to remove deprecated inference resources
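
As a rough wire-level sketch of the input_schema change (only the parameters-to-input_schema rename is taken from this changelog; the surrounding payload keys are illustrative assumptions), a tool definition's JSON schema now travels under `input_schema` instead of `parameters`:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical before/after of a tool-definition payload. Only the
	// parameters -> input_schema rename comes from the changelog; the
	// other keys are assumed for illustration.
	before := map[string]any{
		"name":       "get_weather",
		"parameters": map[string]any{"type": "object"},
	}
	after := map[string]any{
		"name":         "get_weather",
		"input_schema": map[string]any{"type": "object"},
	}
	b, _ := json.Marshal(before)
	a, _ := json.Marshal(after)
	fmt.Println(string(b)) // {"name":"get_weather","parameters":{"type":"object"}}
	fmt.Println(string(a)) // {"name":"get_weather","input_schema":{"type":"object"}}
}
```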

### Features

* **api:** expires_after changes for /files ([222bb4e](https://github.com/llamastack/llama-stack-client-go/commit/222bb4ed27e9a4afd247aacea50d1645326a6b54))
* **api:** fix completion response breakage perhaps? ([3e9c39f](https://github.com/llamastack/llama-stack-client-go/commit/3e9c39f1bf837daf2487ec2c465a9522e6b5befe))
* **api:** fix file batches SDK to list_files ([c9da417](https://github.com/llamastack/llama-stack-client-go/commit/c9da41734d782e758971ca2f5b9821669dbd7331))
* **api:** fixes to remove deprecated inference resources ([9f926b2](https://github.com/llamastack/llama-stack-client-go/commit/9f926b25335010398c825844a60927709ccfc07f))
* **api:** fixes to URLs ([6c9752f](https://github.com/llamastack/llama-stack-client-go/commit/6c9752f49b98e57c4ce06c6310f33d19207b3ae1))
* **api:** manual updates ([eb137af](https://github.com/llamastack/llama-stack-client-go/commit/eb137afa61bf5621159b871561e3a11ca3589b00))
* **api:** manual updates ([9de5708](https://github.com/llamastack/llama-stack-client-go/commit/9de5708ab28e06c58731c3ac57c3e4c4fbc12aca))
* **api:** move post_training and eval under alpha namespace ([42bdca7](https://github.com/llamastack/llama-stack-client-go/commit/42bdca705b93c1c8c3330a4075bd8f5bde977f65))
* **api:** moving { rerank, agents } to `client.alpha.` ([f12fecf](https://github.com/llamastack/llama-stack-client-go/commit/f12fecf6d892348ebee06652255f6ac66f38e0bb))
* **api:** query_metrics, batches, changes ([06e03be](https://github.com/llamastack/llama-stack-client-go/commit/06e03be805d93736fcf4f848c5f9888e2871c911))
* **api:** removing openai/v1 ([b8635d7](https://github.com/llamastack/llama-stack-client-go/commit/b8635d7781c593fc1fb4bda7311189428f5bc128))
* **api:** SDKs for vector store file batches ([2060878](https://github.com/llamastack/llama-stack-client-go/commit/2060878c2b6b81c76b65f56dab6d699df12fb7d0))
* **api:** SDKs for vector store file batches apis ([e5f679f](https://github.com/llamastack/llama-stack-client-go/commit/e5f679f8f193fcdf76aba82288130854e9b86819))
* **api:** some updates to query metrics ([8243d43](https://github.com/llamastack/llama-stack-client-go/commit/8243d43dfa43bb9ec92d1edbed36f114240a265e))
* **api:** tool api (input_schema, etc.) changes ([837277d](https://github.com/llamastack/llama-stack-client-go/commit/837277d4e6b4ffb57ce0136072b80400011da79a))
* **api:** updating post /v1/files to have correct multipart/form-data ([a3d6051](https://github.com/llamastack/llama-stack-client-go/commit/a3d6051547ce2b9cbd5af96b6b802515b737e7fb))
* **api:** use input_schema instead of parameters for tools ([a16eaef](https://github.com/llamastack/llama-stack-client-go/commit/a16eaef870f6ec94ae6adf36eed0d65bfa9fd3b8))


### Bug Fixes

* **api:** another fix to capture correct responses.create() params ([f3a9ee7](https://github.com/llamastack/llama-stack-client-go/commit/f3a9ee7303c890444802c76412d5d245a1420bdb))
* **api:** fix the ToolDefParam updates ([65cef22](https://github.com/llamastack/llama-stack-client-go/commit/65cef2268480297f4233dd1c4c817aa03943f18e))
* bugfix for setting JSON keys with special characters ([ceb15f3](https://github.com/llamastack/llama-stack-client-go/commit/ceb15f300fdf9b7e1b2615c14c352878bcfc082b))
* **client:** fix circular dependencies and offset pagination ([0b95836](https://github.com/llamastack/llama-stack-client-go/commit/0b95836016ca0d089d3f7c07456ff5f55989011f))
* close body before retrying ([66adbea](https://github.com/llamastack/llama-stack-client-go/commit/66adbea266032b1198c76c8f590808d61a3d145a))
* fix stream event model reference ([d8b42f6](https://github.com/llamastack/llama-stack-client-go/commit/d8b42f67eefb216968989a10d68b2ff0e3e65a62))
* **internal:** unmarshal correctly when there are multiple discriminators ([d76c69c](https://github.com/llamastack/llama-stack-client-go/commit/d76c69c30d1402e13178448691d8202e6f2b5d82))
* use slices.Concat instead of sometimes modifying r.Options ([15dfa47](https://github.com/llamastack/llama-stack-client-go/commit/15dfa47636cc1cd0ccb6b089ae363a7e70a5f56c))
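
For context on the slices.Concat fix, here is a generic illustration of the bug class it guards against (not the SDK's actual code): appending to a slice with spare capacity can silently share one backing array between two results, whereas `slices.Concat` (Go 1.22+) always allocates a fresh slice.

```go
package main

import (
	"fmt"
	"slices"
)

func main() {
	base := make([]string, 0, 4) // spare capacity makes the aliasing visible
	base = append(base, "opt-a")

	// Buggy pattern: both appends write into base's shared backing array,
	// so the second request's option overwrites the first's.
	reqA := append(base, "opt-b")
	reqB := append(base, "opt-c") // clobbers reqA[1]
	fmt.Println(reqA[1], reqB[1]) // opt-c opt-c

	// Safe pattern: slices.Concat returns a newly allocated slice each time.
	reqC := slices.Concat(base, []string{"opt-d"})
	reqD := slices.Concat(base, []string{"opt-e"})
	fmt.Println(reqC[1], reqD[1]) // opt-d opt-e
}
```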


### Chores

* bump minimum go version to 1.22 ([7b5d227](https://github.com/llamastack/llama-stack-client-go/commit/7b5d227df87389479dc2f6954ba59147b5d1a0fc))
* do not install brew dependencies in ./scripts/bootstrap by default ([062a46b](https://github.com/llamastack/llama-stack-client-go/commit/062a46b117baaf537ef9a0edef4222d7a1b3a839))
* **internal:** codegen related update ([a3cccf1](https://github.com/llamastack/llama-stack-client-go/commit/a3cccf10d30121514bb6b07a6416c589a1881763))
* update more docs for 1.22 ([67c0b00](https://github.com/llamastack/llama-stack-client-go/commit/67c0b0067523c93560b6d6467b81e3e8c2ecb61e))


### Documentation

* update examples ([245c643](https://github.com/llamastack/llama-stack-client-go/commit/245c643bb01b573243c31bea5f66761ef7e3fba1))

## 0.1.0-alpha.1 (2025-08-21)

Full Changelog: [v0.0.1-alpha.0...v0.1.0-alpha.1](https://github.com/llamastack/llama-stack-client-go/compare/v0.0.1-alpha.0...v0.1.0-alpha.1)
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -9,7 +9,7 @@
$ ./scripts/lint

This will install all the required dependencies and build the SDK.

-You can also [install go 1.18+ manually](https://go.dev/doc/install).
+You can also [install go 1.22+ manually](https://go.dev/doc/install).

## Modifying/Adding code

60 changes: 32 additions & 28 deletions README.md
@@ -1,7 +1,11 @@
# Llama Stack Client Go API Library

<!-- x-release-please-start-version -->

<a href="https://pkg.go.dev/github.com/llamastack/llama-stack-client-go"><img src="https://pkg.go.dev/badge/github.com/llamastack/llama-stack-client-go.svg" alt="Go Reference"></a>

<!-- x-release-please-end -->

The Llama Stack Client Go library provides convenient access to the [Llama Stack Client REST API](https://llama-stack.readthedocs.io/en/latest/)
from applications written in Go.

@@ -24,14 +28,14 @@
Or to pin the version:
<!-- x-release-please-start-version -->

```sh
-go get -u 'github.com/llamastack/llama-stack-client-go@v0.1.0-alpha.1'
+go get -u 'github.com/llamastack/llama-stack-client-go@v0.1.0-alpha.2'
```

<!-- x-release-please-end -->

## Requirements

-This library requires Go 1.18+.
+This library requires Go 1.22+.

## Usage

@@ -261,7 +265,7 @@
client := llamastackclient.NewClient(
option.WithHeader("X-Some-Header", "custom_header_info"),
)

-client.Inference.ChatCompletion(context.TODO(), ...,
+client.Chat.Completions.New(context.TODO(), ...,
// Override the header
option.WithHeader("X-Some-Header", "some_other_custom_header_info"),
// Add an undocumented field to the request body, using sjson syntax
@@ -292,23 +296,23 @@
When the API returns a non-success status code, we return an error with type
To handle errors, we recommend that you use the `errors.As` pattern:

```go
-_, err := client.Inference.ChatCompletion(context.TODO(), llamastackclient.InferenceChatCompletionParams{
-    Messages: []shared.MessageUnionParam{{
-        OfUser: &shared.UserMessageParam{
-            Content: shared.InterleavedContentUnionParam{
+_, err := client.Chat.Completions.New(context.TODO(), llamastackclient.ChatCompletionNewParams{
+    Messages: []llamastackclient.ChatCompletionNewParamsMessageUnion{{
+        OfUser: &llamastackclient.ChatCompletionNewParamsMessageUser{
+            Content: llamastackclient.ChatCompletionNewParamsMessageUserContentUnion{
                OfString: llamastackclient.String("string"),
            },
        },
    }},
-    ModelID: "model_id",
+    Model: "model",
})
if err != nil {
    var apierr *llamastackclient.Error
    if errors.As(err, &apierr) {
        println(string(apierr.DumpRequest(true)))  // Prints the serialized HTTP request
        println(string(apierr.DumpResponse(true))) // Prints the serialized HTTP response
    }
-    panic(err.Error()) // GET "/v1/inference/chat-completion": 400 Bad Request { ... }
+    panic(err.Error()) // GET "/v1/chat/completions": 400 Bad Request { ... }
}
```

@@ -326,17 +330,17 @@
To set a per-retry timeout, use `option.WithRequestTimeout()`.
// This sets the timeout for the request, including all the retries.
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
defer cancel()
-client.Inference.ChatCompletion(
+client.Chat.Completions.New(
    ctx,
-    llamastackclient.InferenceChatCompletionParams{
-        Messages: []shared.MessageUnionParam{{
-            OfUser: &shared.UserMessageParam{
-                Content: shared.InterleavedContentUnionParam{
+    llamastackclient.ChatCompletionNewParams{
+        Messages: []llamastackclient.ChatCompletionNewParamsMessageUnion{{
+            OfUser: &llamastackclient.ChatCompletionNewParamsMessageUser{
+                Content: llamastackclient.ChatCompletionNewParamsMessageUserContentUnion{
                    OfString: llamastackclient.String("string"),
                },
            },
        }},
-        ModelID: "model_id",
+        Model: "model",
    },
    // This sets the per-retry timeout
    option.WithRequestTimeout(20*time.Second),
@@ -392,17 +396,17 @@
client := llamastackclient.NewClient(
)

// Override per-request:
-client.Inference.ChatCompletion(
+client.Chat.Completions.New(
    context.TODO(),
-    llamastackclient.InferenceChatCompletionParams{
-        Messages: []shared.MessageUnionParam{{
-            OfUser: &shared.UserMessageParam{
-                Content: shared.InterleavedContentUnionParam{
+    llamastackclient.ChatCompletionNewParams{
+        Messages: []llamastackclient.ChatCompletionNewParamsMessageUnion{{
+            OfUser: &llamastackclient.ChatCompletionNewParamsMessageUser{
+                Content: llamastackclient.ChatCompletionNewParamsMessageUserContentUnion{
                    OfString: llamastackclient.String("string"),
                },
            },
        }},
-        ModelID: "model_id",
+        Model: "model",
    },
    option.WithMaxRetries(5),
)
@@ -416,24 +420,24 @@
you need to examine response headers, status codes, or other details.
```go
// Create a variable to store the HTTP response
var response *http.Response
-chatCompletionResponse, err := client.Inference.ChatCompletion(
+completion, err := client.Chat.Completions.New(
    context.TODO(),
-    llamastackclient.InferenceChatCompletionParams{
-        Messages: []shared.MessageUnionParam{{
-            OfUser: &shared.UserMessageParam{
-                Content: shared.InterleavedContentUnionParam{
+    llamastackclient.ChatCompletionNewParams{
+        Messages: []llamastackclient.ChatCompletionNewParamsMessageUnion{{
+            OfUser: &llamastackclient.ChatCompletionNewParamsMessageUser{
+                Content: llamastackclient.ChatCompletionNewParamsMessageUserContentUnion{
                    OfString: llamastackclient.String("string"),
                },
            },
        }},
-        ModelID: "model_id",
+        Model: "model",
    },
    option.WithResponseInto(&response),
)
if err != nil {
// handle error
}
-fmt.Printf("%+v\n", chatCompletionResponse)
+fmt.Printf("%+v\n", completion)

fmt.Printf("Status Code: %d\n", response.StatusCode)
fmt.Printf("Headers: %+#v\n", response.Header)