
release: 1.97.1 #2482


Merged: 5 commits merged on Jul 22, 2025
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "1.97.0"
".": "1.97.1"
}
6 changes: 3 additions & 3 deletions .stats.yml
@@ -1,4 +1,4 @@
configured_endpoints: 111
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-670ea0d2cc44f52a87dd3cadea45632953283e0636ba30788fdbdb22a232ccac.yml
openapi_spec_hash: d8b7d38911fead545adf3e4297956410
config_hash: 5525bda35e48ea6387c6175c4d1651fa
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-b2a451656ca64d30d174391ebfd94806b4de3ab76dc55b92843cfb7f1a54ecb6.yml
openapi_spec_hash: 27d9691b400f28c17ef063a1374048b0
config_hash: e822d0c9082c8b312264403949243179
14 changes: 14 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,19 @@
# Changelog

## 1.97.1 (2025-07-22)

Full Changelog: [v1.97.0...v1.97.1](https://github.com/openai/openai-python/compare/v1.97.0...v1.97.1)

### Bug Fixes

* **parsing:** ignore empty metadata ([58c359f](https://github.com/openai/openai-python/commit/58c359ff67fd6103268e4405600fd58844b6f27b))
* **parsing:** parse extra field types ([d524b7e](https://github.com/openai/openai-python/commit/d524b7e201418ccc9b5c2206da06d1be011808e5))


### Chores

* **api:** event shapes more accurate ([f3a9a92](https://github.com/openai/openai-python/commit/f3a9a9229280ecb7e0b2779dd44290df6d9824ef))

## 1.97.0 (2025-07-16)

Full Changelog: [v1.96.1...v1.97.0](https://github.com/openai/openai-python/compare/v1.96.1...v1.97.0)
2 changes: 0 additions & 2 deletions api.md
@@ -791,8 +791,6 @@ from openai.types.responses import (
ResponseOutputTextAnnotationAddedEvent,
ResponsePrompt,
ResponseQueuedEvent,
ResponseReasoningDeltaEvent,
ResponseReasoningDoneEvent,
ResponseReasoningItem,
ResponseReasoningSummaryDeltaEvent,
ResponseReasoningSummaryDoneEvent,
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "openai"
version = "1.97.0"
version = "1.97.1"
description = "The official Python library for the openai API"
dynamic = ["readme"]
license = "Apache-2.0"
27 changes: 24 additions & 3 deletions src/openai/_models.py
@@ -233,14 +233,18 @@ def construct( # pyright: ignore[reportIncompatibleMethodOverride]
else:
fields_values[name] = field_get_default(field)

extra_field_type = _get_extra_fields_type(__cls)

_extra = {}
for key, value in values.items():
if key not in model_fields:
parsed = construct_type(value=value, type_=extra_field_type) if extra_field_type is not None else value

if PYDANTIC_V2:
_extra[key] = value
_extra[key] = parsed
else:
_fields_set.add(key)
fields_values[key] = value
fields_values[key] = parsed

object.__setattr__(m, "__dict__", fields_values)
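The hunk above is the "parse extra field types" fix: keys not declared on the model are now coerced with `construct_type` when the model declares a typed extras schema, instead of being stored raw. A minimal, self-contained sketch of that loop (the `coerce` callable is a hypothetical stand-in for `construct_type`, which also handles unions and nested models):

```python
from typing import Any, Callable, Optional


def collect_extras(
    values: dict[str, Any],
    model_fields: set[str],
    coerce: Optional[Callable[[Any], Any]] = None,
) -> dict[str, Any]:
    """Mirror of the construct() loop: keys not declared on the model are
    kept as extras, coerced when an extras type/coercer is known."""
    extra: dict[str, Any] = {}
    for key, value in values.items():
        if key not in model_fields:
            extra[key] = coerce(value) if coerce is not None else value
    return extra
```

For example, `collect_extras({"id": "x", "score": "0.5"}, {"id"}, float)` keeps `score` as a coerced `float` rather than the raw string, which is the behavioral change this hunk introduces.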

@@ -395,6 +399,23 @@ def _construct_field(value: object, field: FieldInfo, key: str) -> object:
return construct_type(value=value, type_=type_, metadata=getattr(field, "metadata", None))


def _get_extra_fields_type(cls: type[pydantic.BaseModel]) -> type | None:
if not PYDANTIC_V2:
# TODO
return None

schema = cls.__pydantic_core_schema__
if schema["type"] == "model":
fields = schema["schema"]
if fields["type"] == "model-fields":
extras = fields.get("extras_schema")
if extras and "cls" in extras:
# mypy can't narrow the type
return extras["cls"] # type: ignore[no-any-return]

return None
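`_get_extra_fields_type` walks the Pydantic v2 core schema to find the declared extras type. The shape it expects can be exercised with a hand-built dict standing in for `__pydantic_core_schema__` (the nesting below is an assumption read off the code above, not the full core-schema spec):

```python
from typing import Any, Optional


def extras_type_from_schema(schema: dict[str, Any]) -> Optional[type]:
    # Same traversal as _get_extra_fields_type: "model" -> "model-fields"
    # -> "extras_schema" -> "cls", returning None whenever a level is absent.
    if schema.get("type") == "model":
        fields = schema.get("schema", {})
        if fields.get("type") == "model-fields":
            extras = fields.get("extras_schema")
            if extras and "cls" in extras:
                return extras["cls"]
    return None


# Hypothetical schemas mimicking models with typed and untyped extras.
typed_extras = {
    "type": "model",
    "schema": {"type": "model-fields", "extras_schema": {"type": "int", "cls": int}},
}
untyped_extras = {"type": "model", "schema": {"type": "model-fields"}}
```

Here `extras_type_from_schema(typed_extras)` yields `int`, while the untyped variant falls through to `None`, matching the Pydantic v1 path's `# TODO` behavior.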


def is_basemodel(type_: type) -> bool:
"""Returns whether or not the given type is either a `BaseModel` or a union of `BaseModel`"""
if is_union(type_):
@@ -464,7 +485,7 @@ def construct_type(*, value: object, type_: object, metadata: Optional[List[Any]
type_ = type_.__value__ # type: ignore[unreachable]

# unwrap `Annotated[T, ...]` -> `T`
if metadata is not None:
if metadata is not None and len(metadata) > 0:
meta: tuple[Any, ...] = tuple(metadata)
elif is_annotated_type(type_):
meta = get_args(type_)[1:]
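The `len(metadata) > 0` guard is the "ignore empty metadata" fix: an empty list is no longer treated as real metadata, so the `Annotated[...]` unwrapping branch still runs. A reduced sketch of the branch order, using `get_origin` in place of the SDK's `is_annotated_type` helper:

```python
from typing import Annotated, Any, Optional, get_args, get_origin


def unwrap_annotated(type_: Any, metadata: Optional[list] = None) -> tuple[Any, tuple]:
    # Empty metadata now falls through to the Annotated branch (the fix);
    # previously the bare `metadata is not None` check short-circuited it.
    if metadata is not None and len(metadata) > 0:
        return type_, tuple(metadata)
    if get_origin(type_) is Annotated:
        args = get_args(type_)
        return args[0], args[1:]
    return type_, ()
```

With the old check, `unwrap_annotated(Annotated[int, "unit"], metadata=[])` would have returned the still-wrapped `Annotated` type with no metadata; with the guard it unwraps to `int` and recovers `("unit",)`.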
2 changes: 1 addition & 1 deletion src/openai/_version.py
@@ -1,4 +1,4 @@
# File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

__title__ = "openai"
__version__ = "1.97.0" # x-release-please-version
__version__ = "1.97.1" # x-release-please-version
4 changes: 0 additions & 4 deletions src/openai/lib/streaming/responses/_events.py
@@ -21,9 +21,7 @@
ResponseRefusalDoneEvent,
ResponseRefusalDeltaEvent,
ResponseMcpCallFailedEvent,
ResponseReasoningDoneEvent,
ResponseOutputItemDoneEvent,
ResponseReasoningDeltaEvent,
ResponseContentPartDoneEvent,
ResponseOutputItemAddedEvent,
ResponseContentPartAddedEvent,
@@ -139,10 +137,8 @@ class ResponseCompletedEvent(RawResponseCompletedEvent, GenericModel, Generic[Te
ResponseMcpListToolsInProgressEvent,
ResponseOutputTextAnnotationAddedEvent,
ResponseQueuedEvent,
ResponseReasoningDeltaEvent,
ResponseReasoningSummaryDeltaEvent,
ResponseReasoningSummaryDoneEvent,
ResponseReasoningDoneEvent,
],
PropertyInfo(discriminator="type"),
]
2 changes: 2 additions & 0 deletions src/openai/lib/streaming/responses/_responses.py
@@ -264,6 +264,7 @@ def handle_event(self, event: RawResponseStreamEvent) -> List[ResponseStreamEven
item_id=event.item_id,
output_index=event.output_index,
sequence_number=event.sequence_number,
logprobs=event.logprobs,
type="response.output_text.delta",
snapshot=content.text,
)
Expand All @@ -282,6 +283,7 @@ def handle_event(self, event: RawResponseStreamEvent) -> List[ResponseStreamEven
item_id=event.item_id,
output_index=event.output_index,
sequence_number=event.sequence_number,
logprobs=event.logprobs,
type="response.output_text.done",
text=event.text,
parsed=parse_text(event.text, text_format=self._text_format),
8 changes: 2 additions & 6 deletions src/openai/resources/audio/speech.py
@@ -50,9 +50,7 @@ def create(
*,
input: str,
model: Union[str, SpeechModel],
voice: Union[
str, Literal["alloy", "ash", "ballad", "coral", "echo", "fable", "onyx", "nova", "sage", "shimmer", "verse"]
],
voice: Union[str, Literal["alloy", "ash", "ballad", "coral", "echo", "sage", "shimmer", "verse"]],
instructions: str | NotGiven = NOT_GIVEN,
response_format: Literal["mp3", "opus", "aac", "flac", "wav", "pcm"] | NotGiven = NOT_GIVEN,
speed: float | NotGiven = NOT_GIVEN,
@@ -146,9 +144,7 @@ async def create(
*,
input: str,
model: Union[str, SpeechModel],
voice: Union[
str, Literal["alloy", "ash", "ballad", "coral", "echo", "fable", "onyx", "nova", "sage", "shimmer", "verse"]
],
voice: Union[str, Literal["alloy", "ash", "ballad", "coral", "echo", "sage", "shimmer", "verse"]],
instructions: str | NotGiven = NOT_GIVEN,
response_format: Literal["mp3", "opus", "aac", "flac", "wav", "pcm"] | NotGiven = NOT_GIVEN,
speed: float | NotGiven = NOT_GIVEN,
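Dropping `fable`, `onyx`, and `nova` from the `Literal` narrows the suggested options but is not a client-side typing break, because the parameter is `Union[str, Literal[...]]` and still accepts any string. A small check of that typing property, on a reduced copy of the new annotation:

```python
from typing import Literal, Union, get_args

# Reduced copy of the voice annotation from the diff above.
Voice = Union[str, Literal["alloy", "ash", "ballad", "coral", "echo", "sage", "shimmer", "verse"]]

union_args = get_args(Voice)          # (str, Literal[...])
literal_values = get_args(union_args[1])

# Plain `str` remains a member of the union, so a removed name like
# "fable" still type-checks; only the advertised options changed.
assert str in union_args
assert "fable" not in literal_values
```

Whether the API itself still accepts the removed names is a separate, server-side question that this diff does not answer.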
14 changes: 4 additions & 10 deletions src/openai/resources/beta/realtime/sessions.py
@@ -66,9 +66,7 @@ def create(
tools: Iterable[session_create_params.Tool] | NotGiven = NOT_GIVEN,
tracing: session_create_params.Tracing | NotGiven = NOT_GIVEN,
turn_detection: session_create_params.TurnDetection | NotGiven = NOT_GIVEN,
voice: Union[
str, Literal["alloy", "ash", "ballad", "coral", "echo", "fable", "onyx", "nova", "sage", "shimmer", "verse"]
]
voice: Union[str, Literal["alloy", "ash", "ballad", "coral", "echo", "sage", "shimmer", "verse"]]
| NotGiven = NOT_GIVEN,
# Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
# The extra values given here take precedence over values defined on the client or passed to this method.
@@ -163,8 +161,7 @@ def create(

voice: The voice the model uses to respond. Voice cannot be changed during the session
once the model has responded with audio at least once. Current voice options are
`alloy`, `ash`, `ballad`, `coral`, `echo`, `fable`, `onyx`, `nova`, `sage`,
`shimmer`, and `verse`.
`alloy`, `ash`, `ballad`, `coral`, `echo`, `sage`, `shimmer`, and `verse`.

extra_headers: Send extra headers

@@ -251,9 +248,7 @@ async def create(
tools: Iterable[session_create_params.Tool] | NotGiven = NOT_GIVEN,
tracing: session_create_params.Tracing | NotGiven = NOT_GIVEN,
turn_detection: session_create_params.TurnDetection | NotGiven = NOT_GIVEN,
voice: Union[
str, Literal["alloy", "ash", "ballad", "coral", "echo", "fable", "onyx", "nova", "sage", "shimmer", "verse"]
]
voice: Union[str, Literal["alloy", "ash", "ballad", "coral", "echo", "sage", "shimmer", "verse"]]
| NotGiven = NOT_GIVEN,
# Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
# The extra values given here take precedence over values defined on the client or passed to this method.
@@ -348,8 +343,7 @@ async def create(

voice: The voice the model uses to respond. Voice cannot be changed during the session
once the model has responded with audio at least once. Current voice options are
`alloy`, `ash`, `ballad`, `coral`, `echo`, `fable`, `onyx`, `nova`, `sage`,
`shimmer`, and `verse`.
`alloy`, `ash`, `ballad`, `coral`, `echo`, `sage`, `shimmer`, and `verse`.

extra_headers: Send extra headers

12 changes: 6 additions & 6 deletions src/openai/resources/chat/completions/completions.py
@@ -417,7 +417,7 @@ def create(
- If set to 'auto', then the request will be processed with the service tier
configured in the Project settings. Unless otherwise configured, the Project
will use 'default'.
- If set to 'default', then the requset will be processed with the standard
- If set to 'default', then the request will be processed with the standard
pricing and performance for the selected model.
- If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
'priority', then the request will be processed with the corresponding service
@@ -697,7 +697,7 @@ def create(
- If set to 'auto', then the request will be processed with the service tier
configured in the Project settings. Unless otherwise configured, the Project
will use 'default'.
- If set to 'default', then the requset will be processed with the standard
- If set to 'default', then the request will be processed with the standard
pricing and performance for the selected model.
- If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
'priority', then the request will be processed with the corresponding service
@@ -968,7 +968,7 @@ def create(
- If set to 'auto', then the request will be processed with the service tier
configured in the Project settings. Unless otherwise configured, the Project
will use 'default'.
- If set to 'default', then the requset will be processed with the standard
- If set to 'default', then the request will be processed with the standard
pricing and performance for the selected model.
- If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
'priority', then the request will be processed with the corresponding service
@@ -1784,7 +1784,7 @@ async def create(
- If set to 'auto', then the request will be processed with the service tier
configured in the Project settings. Unless otherwise configured, the Project
will use 'default'.
- If set to 'default', then the requset will be processed with the standard
- If set to 'default', then the request will be processed with the standard
pricing and performance for the selected model.
- If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
'priority', then the request will be processed with the corresponding service
@@ -2064,7 +2064,7 @@ async def create(
- If set to 'auto', then the request will be processed with the service tier
configured in the Project settings. Unless otherwise configured, the Project
will use 'default'.
- If set to 'default', then the requset will be processed with the standard
- If set to 'default', then the request will be processed with the standard
pricing and performance for the selected model.
- If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
'priority', then the request will be processed with the corresponding service
@@ -2335,7 +2335,7 @@ async def create(
- If set to 'auto', then the request will be processed with the service tier
configured in the Project settings. Unless otherwise configured, the Project
will use 'default'.
- If set to 'default', then the requset will be processed with the standard
- If set to 'default', then the request will be processed with the standard
pricing and performance for the selected model.
- If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
'priority', then the request will be processed with the corresponding service