Commit 5d1a68f

release: 0.2.18-alpha.3 (#256)

Automated Release PR

---

## 0.2.18-alpha.3 (2025-08-14)

Full Changelog: [v0.2.18-alpha.2...v0.2.18-alpha.3](v0.2.18-alpha.2...v0.2.18-alpha.3)

### Features

* `llama-stack-client providers inspect PROVIDER_ID` ([#181](#181)) ([6d18aae](6d18aae))
* add client-side utility for getting OAuth tokens simply ([#230](#230)) ([91156dc](91156dc))
* add client.chat.completions.create() and client.completions.create() ([#226](#226)) ([ee0e65e](ee0e65e))
* Add llama-stack-client datasets unregister command ([#222](#222)) ([38cd91c](38cd91c))
* add support for chat sessions ([#167](#167)) ([ce3b30f](ce3b30f))
* add type hints to event logger util ([#140](#140)) ([26f3c33](26f3c33))
* add updated batch inference types ([#220](#220)) ([ddb93ca](ddb93ca))
* add weighted_average aggregation function support ([#208](#208)) ([b62ac6c](b62ac6c))
* **agent:** support multiple tool calls ([#192](#192)) ([43ea2f6](43ea2f6))
* **agent:** support plain function as client_tool ([#187](#187)) ([2ec8044](2ec8044))
* **api:** update via SDK Studio ([48fd19c](48fd19c))
* async agent wrapper ([#169](#169)) ([fc9907c](fc9907c))
* autogen llama-stack-client CLI reference doc ([#190](#190)) ([e7b19a5](e7b19a5))
* client.responses.create() and client.responses.retrieve() ([#227](#227)) ([fba5102](fba5102))
* datasets api updates ([#203](#203)) ([b664564](b664564))
* enable_persist: sync updates from stainless branch: yanxi0830/dev ([#145](#145)) ([59a02f0](59a02f0))
* new Agent API ([#178](#178)) ([c2f73b1](c2f73b1))
* support client tool output metadata ([#180](#180)) ([8e4fd56](8e4fd56))
* Sync updates from stainless branch: ehhuang/dev ([#149](#149)) ([367da69](367da69))
* unify max infer iters with server/client tools ([#173](#173)) ([548f2de](548f2de))
* update react with new agent api ([#189](#189)) ([ac9d1e2](ac9d1e2))

### Bug Fixes

* `llama-stack-client provider inspect` should use retrieve ([#202](#202)) ([e33b5bf](e33b5bf))
* accept extra_headers in agent.create_turn and pass them faithfully ([#228](#228)) ([e72d9e8](e72d9e8))
* added uv.lock ([546e0df](546e0df))
* **agent:** better error handling ([#207](#207)) ([5746f91](5746f91))
* **agent:** initialize toolgroups/client_tools ([#186](#186)) ([458e207](458e207))
* broken .retrieve call using `identifier=` ([#135](#135)) ([626805a](626805a))
* bump to 0.2.1 ([edb6173](edb6173))
* bump version ([b6d45b8](b6d45b8))
* bump version in another place ([7253433](7253433))
* **cli:** align cli toolgroups register to the new arguments ([#231](#231)) ([a87b6f7](a87b6f7))
* correct toolgroups_id parameter name on unregister call ([#235](#235)) ([1be7904](1be7904))
* fix duplicate model get help text ([#188](#188)) ([4bab07a](4bab07a))
* llama-stack-client providers list ([#134](#134)) ([930138a](930138a))
* react agent ([#200](#200)) ([b779979](b779979))
* React Agent for non-llama models ([#174](#174)) ([ee5dd2b](ee5dd2b))
* React agent should be able to work with provided config ([#146](#146)) ([08ab5df](08ab5df))
* react agent with custom tool parser n_iters ([#184](#184)) ([aaff961](aaff961))
* remove the alpha suffix in run_benchmark.py ([#179](#179)) ([638f7f2](638f7f2))
* update CONTRIBUTING.md to point to uv instead of rye ([3fbe0cd](3fbe0cd))
* update uv lock ([cc072c8](cc072c8))
* validate endpoint url ([#196](#196)) ([6fa8095](6fa8095))

### Chores

* api sync, deprecate allow_resume_turn + rename task_config->benchmark_config (Sync updates from stainless branch: yanxi0830/dev) ([#176](#176)) ([96749af](96749af))
* AsyncAgent should use ToolResponse instead of ToolResponseMessage ([#197](#197)) ([6191aa5](6191aa5))
* **copy:** Copy changes over from llamastack/ org repository ([#255](#255)) ([7ade969](7ade969))
* deprecate eval task (Sync updates from stainless branch: main) ([#150](#150)) ([39b1248](39b1248))
* remove litellm type conversion ([#193](#193)) ([ab3f844](ab3f844))
* sync repo ([099bfc6](099bfc6))
* Sync updates from stainless branch: ehhuang/dev ([#182](#182)) ([e33aa4a](e33aa4a))
* Sync updates from stainless branch: ehhuang/dev ([#199](#199)) ([fa73d7d](fa73d7d))
* Sync updates from stainless branch: main ([#201](#201)) ([f063f2d](f063f2d))
* use rich to format logs ([#177](#177)) ([303054b](303054b))

### Refactors

* update react_agent to use tool_config ([#139](#139)) ([b5dce10](b5dce10))

### Build System

* Bump version to 0.1.19 ([ccd52f8](ccd52f8))
* Bump version to 0.1.8 ([0144e85](0144e85))
* Bump version to 0.1.9 ([7e00b78](7e00b78))
* Bump version to 0.2.10 ([05e41a6](05e41a6))
* Bump version to 0.2.11 ([d2e7537](d2e7537))
* Bump version to 0.2.12 ([e3d812e](e3d812e))
* Bump version to 0.2.13 ([b6c6c5e](b6c6c5e))
* Bump version to 0.2.2 ([47f8fd5](47f8fd5))
* Bump version to 0.2.4 ([7e6f5fc](7e6f5fc))
* Bump version to 0.2.5 ([62bd127](62bd127))
* Bump version to 0.2.6 ([3dd707f](3dd707f))
* Bump version to 0.2.7 ([e39ba88](e39ba88))
* Bump version to 0.2.8 ([645d219](645d219))
* Bump version to 0.2.9 ([d360557](d360557))

---

This pull request is managed by Stainless's [GitHub App](https://github.com/apps/stainless-app). The [semver version number](https://semver.org/#semantic-versioning-specification-semver) is based on included [commit messages](https://www.conventionalcommits.org/en/v1.0.0/). Alternatively, you can manually set the version number in the title of this pull request.

For a better experience, it is recommended to use either rebase-merge or squash-merge when merging this pull request.

🔗 Stainless [website](https://www.stainlessapi.com)
📚 Read the [docs](https://app.stainlessapi.com/docs)
🙋 [Reach out](mailto:[email protected]) for help or questions

---

Co-authored-by: stainless-app[bot] <142633134+stainless-app[bot]@users.noreply.github.com>
1 parent 7ade969 commit 5d1a68f

File tree

7 files changed: +183 additions, -6 deletions

.gitignore

Lines changed: 0 additions & 2 deletions
```diff
@@ -1,5 +1,4 @@
 .prism.log
-.vscode
 _dev

 __pycache__
@@ -14,4 +13,3 @@ dist
 .envrc
 codegen.log
 Brewfile.lock.json
-.DS_Store
```

.release-please-manifest.json

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,3 +1,3 @@
 {
-  ".": "0.2.18-alpha.2"
+  ".": "0.2.18-alpha.3"
 }
```

.stats.yml

Lines changed: 2 additions & 2 deletions
```diff
@@ -1,4 +1,4 @@
 configured_endpoints: 106
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/llamastack%2Fllama-stack-client-7c002d994b96113926e24a0f99ff80a52b937481e383b584496087ecdc2d92d6.yml
-openapi_spec_hash: e9c825e9199979fc5f754426a1334499
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/llamastack%2Fllama-stack-client-4f6633567c1a079df49d0cf58f37251a4bb0ee2f2a496ac83c9fee26eb325f9c.yml
+openapi_spec_hash: af5b3d3bbecf48f15c90b982ccac852e
 config_hash: e67fd054e95c1e82f78f4b834e96bb65
```

.vscode/settings.json

Lines changed: 3 additions & 0 deletions
```diff
@@ -0,0 +1,3 @@
+{
+    "python.analysis.importFormat": "relative",
+}
```

CHANGELOG.md

Lines changed: 90 additions & 0 deletions
```diff
@@ -1,5 +1,95 @@
 # Changelog

+## 0.2.18-alpha.3 (2025-08-14)
+
+Full Changelog: [v0.2.18-alpha.2...v0.2.18-alpha.3](https://github.com/llamastack/llama-stack-client-python/compare/v0.2.18-alpha.2...v0.2.18-alpha.3)
+
+### Features
+
+* `llama-stack-client providers inspect PROVIDER_ID` ([#181](https://github.com/llamastack/llama-stack-client-python/issues/181)) ([6d18aae](https://github.com/llamastack/llama-stack-client-python/commit/6d18aae31ce739b1a37a72b880aa8a60f890df72))
+* add client-side utility for getting OAuth tokens simply ([#230](https://github.com/llamastack/llama-stack-client-python/issues/230)) ([91156dc](https://github.com/llamastack/llama-stack-client-python/commit/91156dca28567352c5f6be75d55327ef2b49ff19))
+* add client.chat.completions.create() and client.completions.create() ([#226](https://github.com/llamastack/llama-stack-client-python/issues/226)) ([ee0e65e](https://github.com/llamastack/llama-stack-client-python/commit/ee0e65e89dba13431cc3b9abdbebaa9525a5fbfb))
+* Add llama-stack-client datasets unregister command ([#222](https://github.com/llamastack/llama-stack-client-python/issues/222)) ([38cd91c](https://github.com/llamastack/llama-stack-client-python/commit/38cd91c9e396f2be0bec1ee96a19771582ba6f17))
+* add support for chat sessions ([#167](https://github.com/llamastack/llama-stack-client-python/issues/167)) ([ce3b30f](https://github.com/llamastack/llama-stack-client-python/commit/ce3b30f83eb122cc200c441ddad5e173e02e5adb))
+* add type hints to event logger util ([#140](https://github.com/llamastack/llama-stack-client-python/issues/140)) ([26f3c33](https://github.com/llamastack/llama-stack-client-python/commit/26f3c33cd0f81b809afa514b9a8ca63fa64643ca))
+* add updated batch inference types ([#220](https://github.com/llamastack/llama-stack-client-python/issues/220)) ([ddb93ca](https://github.com/llamastack/llama-stack-client-python/commit/ddb93ca206d97c82c51a0efed5985a7396fcdf3c))
+* add weighted_average aggregation function support ([#208](https://github.com/llamastack/llama-stack-client-python/issues/208)) ([b62ac6c](https://github.com/llamastack/llama-stack-client-python/commit/b62ac6cf2f2f20e248cbbce6684cef50f150cac0))
+* **agent:** support multiple tool calls ([#192](https://github.com/llamastack/llama-stack-client-python/issues/192)) ([43ea2f6](https://github.com/llamastack/llama-stack-client-python/commit/43ea2f6d741b26181db1d7ba0912c17a9ed1ca74))
+* **agent:** support plain function as client_tool ([#187](https://github.com/llamastack/llama-stack-client-python/issues/187)) ([2ec8044](https://github.com/llamastack/llama-stack-client-python/commit/2ec8044356b5d6285948ae22da007899f6148408))
+* **api:** update via SDK Studio ([48fd19c](https://github.com/llamastack/llama-stack-client-python/commit/48fd19caff46f4ea58cdcfb402e056ccadd096b8))
+* async agent wrapper ([#169](https://github.com/llamastack/llama-stack-client-python/issues/169)) ([fc9907c](https://github.com/llamastack/llama-stack-client-python/commit/fc9907c781dc406756c20d8a1829343eac0c31c0))
+* autogen llama-stack-client CLI reference doc ([#190](https://github.com/llamastack/llama-stack-client-python/issues/190)) ([e7b19a5](https://github.com/llamastack/llama-stack-client-python/commit/e7b19a505cc06c28846e85bb5b8524632bdef4d6))
+* client.responses.create() and client.responses.retrieve() ([#227](https://github.com/llamastack/llama-stack-client-python/issues/227)) ([fba5102](https://github.com/llamastack/llama-stack-client-python/commit/fba5102d03f85627025f4589216651d135841d5a))
+* datasets api updates ([#203](https://github.com/llamastack/llama-stack-client-python/issues/203)) ([b664564](https://github.com/llamastack/llama-stack-client-python/commit/b664564fe1c4771a7872286d0c2ac96c47816939))
+* enable_persist: sync updates from stainless branch: yanxi0830/dev ([#145](https://github.com/llamastack/llama-stack-client-python/issues/145)) ([59a02f0](https://github.com/llamastack/llama-stack-client-python/commit/59a02f071b14cb6627c929c4d396a3d996219c78))
+* new Agent API ([#178](https://github.com/llamastack/llama-stack-client-python/issues/178)) ([c2f73b1](https://github.com/llamastack/llama-stack-client-python/commit/c2f73b11301c6c4a87e58ded9055fd49b1626b47))
+* support client tool output metadata ([#180](https://github.com/llamastack/llama-stack-client-python/issues/180)) ([8e4fd56](https://github.com/llamastack/llama-stack-client-python/commit/8e4fd56a318a2806e81679877d703f6270fbcbfe))
+* Sync updates from stainless branch: ehhuang/dev ([#149](https://github.com/llamastack/llama-stack-client-python/issues/149)) ([367da69](https://github.com/llamastack/llama-stack-client-python/commit/367da690dabee8a34039499f8e151cc8f97ca91b))
+* unify max infer iters with server/client tools ([#173](https://github.com/llamastack/llama-stack-client-python/issues/173)) ([548f2de](https://github.com/llamastack/llama-stack-client-python/commit/548f2dee5019b7510d17025f11adbf61431f505e))
+* update react with new agent api ([#189](https://github.com/llamastack/llama-stack-client-python/issues/189)) ([ac9d1e2](https://github.com/llamastack/llama-stack-client-python/commit/ac9d1e2166c88d2445fbbf08e30886fcec6048df))
+
+
+### Bug Fixes
+
+* `llama-stack-client provider inspect` should use retrieve ([#202](https://github.com/llamastack/llama-stack-client-python/issues/202)) ([e33b5bf](https://github.com/llamastack/llama-stack-client-python/commit/e33b5bfbc89c93031434720cf7265f9bc83f2a39))
+* accept extra_headers in agent.create_turn and pass them faithfully ([#228](https://github.com/llamastack/llama-stack-client-python/issues/228)) ([e72d9e8](https://github.com/llamastack/llama-stack-client-python/commit/e72d9e8eb590facd693938a93a7a782e45d15b6d))
+* added uv.lock ([546e0df](https://github.com/llamastack/llama-stack-client-python/commit/546e0df348b648651da94989053c52f4cc43cdc4))
+* **agent:** better error handling ([#207](https://github.com/llamastack/llama-stack-client-python/issues/207)) ([5746f91](https://github.com/llamastack/llama-stack-client-python/commit/5746f918351f9021700f0a90edf6b78e74d58c82))
+* **agent:** initialize toolgroups/client_tools ([#186](https://github.com/llamastack/llama-stack-client-python/issues/186)) ([458e207](https://github.com/llamastack/llama-stack-client-python/commit/458e20702b5aa8f435ac5ce114fee9252b751d25))
+* broken .retrieve call using `identifier=` ([#135](https://github.com/llamastack/llama-stack-client-python/issues/135)) ([626805a](https://github.com/llamastack/llama-stack-client-python/commit/626805a74a19011d742a60187b1119aead153a94))
+* bump to 0.2.1 ([edb6173](https://github.com/llamastack/llama-stack-client-python/commit/edb6173ec1f0da131e097a993d6f177a3655930d))
+* bump version ([b6d45b8](https://github.com/llamastack/llama-stack-client-python/commit/b6d45b862ca846bed635d64816dc7de9d9433e61))
+* bump version in another place ([7253433](https://github.com/llamastack/llama-stack-client-python/commit/7253433f6d7a41fe0812d26e4ce7183f922f2869))
+* **cli:** align cli toolgroups register to the new arguments ([#231](https://github.com/llamastack/llama-stack-client-python/issues/231)) ([a87b6f7](https://github.com/llamastack/llama-stack-client-python/commit/a87b6f7b3fd07262bfbd4321652e51b901c75df5))
+* correct toolgroups_id parameter name on unregister call ([#235](https://github.com/llamastack/llama-stack-client-python/issues/235)) ([1be7904](https://github.com/llamastack/llama-stack-client-python/commit/1be7904133630127c0a98ba4aed1241eee548c81))
+* fix duplicate model get help text ([#188](https://github.com/llamastack/llama-stack-client-python/issues/188)) ([4bab07a](https://github.com/llamastack/llama-stack-client-python/commit/4bab07a683adee9a476ce926fe809dafe3cc27f0))
+* llama-stack-client providers list ([#134](https://github.com/llamastack/llama-stack-client-python/issues/134)) ([930138a](https://github.com/llamastack/llama-stack-client-python/commit/930138a9013ee9157d14ee0606b24c5677bf4387))
+* react agent ([#200](https://github.com/llamastack/llama-stack-client-python/issues/200)) ([b779979](https://github.com/llamastack/llama-stack-client-python/commit/b779979c40c638e835e5190e5877f57430c89d97))
+* React Agent for non-llama models ([#174](https://github.com/llamastack/llama-stack-client-python/issues/174)) ([ee5dd2b](https://github.com/llamastack/llama-stack-client-python/commit/ee5dd2b662ffdeb78b324dddd6884a4d0f1fd901))
+* React agent should be able to work with provided config ([#146](https://github.com/llamastack/llama-stack-client-python/issues/146)) ([08ab5df](https://github.com/llamastack/llama-stack-client-python/commit/08ab5df583bb74dea9104950c190f6101eb19c95))
+* react agent with custom tool parser n_iters ([#184](https://github.com/llamastack/llama-stack-client-python/issues/184)) ([aaff961](https://github.com/llamastack/llama-stack-client-python/commit/aaff9618601f1cded040e57e0d8067699e595208))
+* remove the alpha suffix in run_benchmark.py ([#179](https://github.com/llamastack/llama-stack-client-python/issues/179)) ([638f7f2](https://github.com/llamastack/llama-stack-client-python/commit/638f7f29513cdb87b9bf0cf7bc269d2c576d37ba))
+* update CONTRIBUTING.md to point to uv instead of rye ([3fbe0cd](https://github.com/llamastack/llama-stack-client-python/commit/3fbe0cdd6a8e935732ddc513b0a6af01623a6999))
+* update uv lock ([cc072c8](https://github.com/llamastack/llama-stack-client-python/commit/cc072c81b59c26f21eaba6ee0a7d56fc61c0317a))
+* validate endpoint url ([#196](https://github.com/llamastack/llama-stack-client-python/issues/196)) ([6fa8095](https://github.com/llamastack/llama-stack-client-python/commit/6fa8095428804a9cc348b403468cad64e4eeb38b))
+
+
+### Chores
+
+* api sync, deprecate allow_resume_turn + rename task_config->benchmark_config (Sync updates from stainless branch: yanxi0830/dev) ([#176](https://github.com/llamastack/llama-stack-client-python/issues/176)) ([96749af](https://github.com/llamastack/llama-stack-client-python/commit/96749af83891d47be1f8f46588be567db685cf12))
+* AsyncAgent should use ToolResponse instead of ToolResponseMessage ([#197](https://github.com/llamastack/llama-stack-client-python/issues/197)) ([6191aa5](https://github.com/llamastack/llama-stack-client-python/commit/6191aa5cc38c4ef9be27452e04867b6ce8a703e2))
+* **copy:** Copy changes over from llamastack/ org repository ([#255](https://github.com/llamastack/llama-stack-client-python/issues/255)) ([7ade969](https://github.com/llamastack/llama-stack-client-python/commit/7ade96987294cfd164d710befb15943fd8f8bb8b))
+* deprecate eval task (Sync updates from stainless branch: main) ([#150](https://github.com/llamastack/llama-stack-client-python/issues/150)) ([39b1248](https://github.com/llamastack/llama-stack-client-python/commit/39b1248e3e1b0634e96db6bb4eac7d689e3a5a19))
+* remove litellm type conversion ([#193](https://github.com/llamastack/llama-stack-client-python/issues/193)) ([ab3f844](https://github.com/llamastack/llama-stack-client-python/commit/ab3f844a8a7a8dc68723ed36120914fd01a18af2))
+* sync repo ([099bfc6](https://github.com/llamastack/llama-stack-client-python/commit/099bfc66cdc115e857d5cfba675a090148619c92))
+* Sync updates from stainless branch: ehhuang/dev ([#182](https://github.com/llamastack/llama-stack-client-python/issues/182)) ([e33aa4a](https://github.com/llamastack/llama-stack-client-python/commit/e33aa4a682fda23d708438a976dfe4dd5443a320))
+* Sync updates from stainless branch: ehhuang/dev ([#199](https://github.com/llamastack/llama-stack-client-python/issues/199)) ([fa73d7d](https://github.com/llamastack/llama-stack-client-python/commit/fa73d7ddb72682d47464eca6b1476044e140a560))
+* Sync updates from stainless branch: main ([#201](https://github.com/llamastack/llama-stack-client-python/issues/201)) ([f063f2d](https://github.com/llamastack/llama-stack-client-python/commit/f063f2d6126d2bd1f9a8dcf854a32ae7cd4be607))
+* use rich to format logs ([#177](https://github.com/llamastack/llama-stack-client-python/issues/177)) ([303054b](https://github.com/llamastack/llama-stack-client-python/commit/303054b6a64e47dbdf7de93458433b71bb1ff59c))
+
+
+### Refactors
+
+* update react_agent to use tool_config ([#139](https://github.com/llamastack/llama-stack-client-python/issues/139)) ([b5dce10](https://github.com/llamastack/llama-stack-client-python/commit/b5dce10f0a621f8f8a0f893dba4d2acebd7e438b))
+
+
+### Build System
+
+* Bump version to 0.1.19 ([ccd52f8](https://github.com/llamastack/llama-stack-client-python/commit/ccd52f8bb298ecfd3ec06ae2d50ccaeebbfb3973))
+* Bump version to 0.1.8 ([0144e85](https://github.com/llamastack/llama-stack-client-python/commit/0144e857c83afc807122b32f3f53775e87c027ac))
+* Bump version to 0.1.9 ([7e00b78](https://github.com/llamastack/llama-stack-client-python/commit/7e00b784ee859aa04aa11955e3888e5167331dfe))
+* Bump version to 0.2.10 ([05e41a6](https://github.com/llamastack/llama-stack-client-python/commit/05e41a6eb12053b850a3abc56bb35e3121042be2))
+* Bump version to 0.2.11 ([d2e7537](https://github.com/llamastack/llama-stack-client-python/commit/d2e753751519cb9f0e09d255e875f60449ab30aa))
+* Bump version to 0.2.12 ([e3d812e](https://github.com/llamastack/llama-stack-client-python/commit/e3d812ee3a85949e31e448e68c03534225b4ed07))
+* Bump version to 0.2.13 ([b6c6c5e](https://github.com/llamastack/llama-stack-client-python/commit/b6c6c5ed7940bb625665d50f88ff7ea9d734e100))
+* Bump version to 0.2.2 ([47f8fd5](https://github.com/llamastack/llama-stack-client-python/commit/47f8fd568634c9e2f7cd7d86f92f7c43cfc448cd))
+* Bump version to 0.2.4 ([7e6f5fc](https://github.com/llamastack/llama-stack-client-python/commit/7e6f5fce18f23b807e52ac173251687c3b58979b))
+* Bump version to 0.2.5 ([62bd127](https://github.com/llamastack/llama-stack-client-python/commit/62bd12799d8a4a0261d200d1c869e2be98c38770))
+* Bump version to 0.2.6 ([3dd707f](https://github.com/llamastack/llama-stack-client-python/commit/3dd707fb84ba2ce56151cec9fb30918c651ccdd9))
+* Bump version to 0.2.7 ([e39ba88](https://github.com/llamastack/llama-stack-client-python/commit/e39ba882f9d1f635f5e7398f623d7ceeae1b446f))
+* Bump version to 0.2.8 ([645d219](https://github.com/llamastack/llama-stack-client-python/commit/645d2195c5af1c6f903cb93c293319d8f94c36cc))
+* Bump version to 0.2.9 ([d360557](https://github.com/llamastack/llama-stack-client-python/commit/d36055741dd5c152c629dc28ec3b88b2c78f5336))
+
 ## 0.2.18-alpha.2 (2025-08-12)

 Full Changelog: [v0.2.18-alpha.1...v0.2.18-alpha.2](https://github.com/llamastack/llama-stack-client-python/compare/v0.2.18-alpha.1...v0.2.18-alpha.2)
```
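Among the features above, #226 adds `client.chat.completions.create()`, an OpenAI-compatible chat surface. A minimal sketch of how calling code might wrap it, assuming the response follows the usual OpenAI shape (`choices[0].message.content`); the helper name and model id here are illustrative, not part of the SDK:

```python
def ask(client, model: str, prompt: str) -> str:
    """Hypothetical helper around the OpenAI-compatible surface added in #226.

    Works with any client object exposing chat.completions.create() that
    returns a response shaped like OpenAI's (choices[0].message.content).
    """
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

With a real `LlamaStackClient` this would issue an inference request; in tests, any stub exposing the same attribute path can stand in.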

pyproject.toml

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,6 +1,6 @@
 [project]
 name = "llama_stack_client"
-version = "0.2.18-alpha.2"
+version = "0.2.18-alpha.3"
 description = "The official Python library for the llama-stack-client API"
 dynamic = ["readme"]
 license = "MIT"
```

src/llama_stack_client/types/response_object_stream.py

Lines changed: 86 additions & 0 deletions
```diff
@@ -63,6 +63,14 @@
     "OpenAIResponseObjectStreamResponseMcpCallInProgress",
     "OpenAIResponseObjectStreamResponseMcpCallFailed",
     "OpenAIResponseObjectStreamResponseMcpCallCompleted",
+    "OpenAIResponseObjectStreamResponseContentPartAdded",
+    "OpenAIResponseObjectStreamResponseContentPartAddedPart",
+    "OpenAIResponseObjectStreamResponseContentPartAddedPartOpenAIResponseContentPartOutputText",
+    "OpenAIResponseObjectStreamResponseContentPartAddedPartOpenAIResponseContentPartRefusal",
+    "OpenAIResponseObjectStreamResponseContentPartDone",
+    "OpenAIResponseObjectStreamResponseContentPartDonePart",
+    "OpenAIResponseObjectStreamResponseContentPartDonePartOpenAIResponseContentPartOutputText",
+    "OpenAIResponseObjectStreamResponseContentPartDonePartOpenAIResponseContentPartRefusal",
     "OpenAIResponseObjectStreamResponseCompleted",
 ]

@@ -813,6 +821,82 @@ class OpenAIResponseObjectStreamResponseMcpCallCompleted(BaseModel):
     """Event type identifier, always "response.mcp_call.completed" """


+class OpenAIResponseObjectStreamResponseContentPartAddedPartOpenAIResponseContentPartOutputText(BaseModel):
+    text: str
+
+    type: Literal["output_text"]
+
+
+class OpenAIResponseObjectStreamResponseContentPartAddedPartOpenAIResponseContentPartRefusal(BaseModel):
+    refusal: str
+
+    type: Literal["refusal"]
+
+
+OpenAIResponseObjectStreamResponseContentPartAddedPart: TypeAlias = Annotated[
+    Union[
+        OpenAIResponseObjectStreamResponseContentPartAddedPartOpenAIResponseContentPartOutputText,
+        OpenAIResponseObjectStreamResponseContentPartAddedPartOpenAIResponseContentPartRefusal,
+    ],
+    PropertyInfo(discriminator="type"),
+]
+
+
+class OpenAIResponseObjectStreamResponseContentPartAdded(BaseModel):
+    item_id: str
+    """Unique identifier of the output item containing this content part"""
+
+    part: OpenAIResponseObjectStreamResponseContentPartAddedPart
+    """The content part that was added"""
+
+    response_id: str
+    """Unique identifier of the response containing this content"""
+
+    sequence_number: int
+    """Sequential number for ordering streaming events"""
+
+    type: Literal["response.content_part.added"]
+    """Event type identifier, always "response.content_part.added" """
+
+
+class OpenAIResponseObjectStreamResponseContentPartDonePartOpenAIResponseContentPartOutputText(BaseModel):
+    text: str
+
+    type: Literal["output_text"]
+
+
+class OpenAIResponseObjectStreamResponseContentPartDonePartOpenAIResponseContentPartRefusal(BaseModel):
+    refusal: str
+
+    type: Literal["refusal"]
+
+
+OpenAIResponseObjectStreamResponseContentPartDonePart: TypeAlias = Annotated[
+    Union[
+        OpenAIResponseObjectStreamResponseContentPartDonePartOpenAIResponseContentPartOutputText,
+        OpenAIResponseObjectStreamResponseContentPartDonePartOpenAIResponseContentPartRefusal,
+    ],
+    PropertyInfo(discriminator="type"),
+]
+
+
+class OpenAIResponseObjectStreamResponseContentPartDone(BaseModel):
+    item_id: str
+    """Unique identifier of the output item containing this content part"""
+
+    part: OpenAIResponseObjectStreamResponseContentPartDonePart
+    """The completed content part"""
+
+    response_id: str
+    """Unique identifier of the response containing this content"""
+
+    sequence_number: int
+    """Sequential number for ordering streaming events"""
+
+    type: Literal["response.content_part.done"]
+    """Event type identifier, always "response.content_part.done" """
+
+
 class OpenAIResponseObjectStreamResponseCompleted(BaseModel):
     response: ResponseObject
     """The completed response object"""
@@ -841,6 +925,8 @@ class OpenAIResponseObjectStreamResponseCompleted(BaseModel):
         OpenAIResponseObjectStreamResponseMcpCallInProgress,
         OpenAIResponseObjectStreamResponseMcpCallFailed,
         OpenAIResponseObjectStreamResponseMcpCallCompleted,
+        OpenAIResponseObjectStreamResponseContentPartAdded,
+        OpenAIResponseObjectStreamResponseContentPartDone,
         OpenAIResponseObjectStreamResponseCompleted,
     ],
     PropertyInfo(discriminator="type"),
```
