release: 2.1.0 #481

Open · wants to merge 4 commits into base: main

2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "2.0.2"
".": "2.1.0"
}
6 changes: 3 additions & 3 deletions .stats.yml
@@ -1,4 +1,4 @@
configured_endpoints: 97
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-6a1bfd4738fff02ef5becc3fdb2bf0cd6c026f2c924d4147a2a515474477dd9a.yml
openapi_spec_hash: 3eb8d86c06f0bb5e1190983e5acfc9ba
config_hash: a67c5e195a59855fe8a5db0dc61a3e7f
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-24be531010b354303d741fc9247c1f84f75978f9f7de68aca92cb4f240a04722.yml
openapi_spec_hash: 3e46f439f6a863beadc71577eb4efa15
config_hash: ed87b9139ac595a04a2162d754df2fed
13 changes: 13 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,18 @@
# Changelog

## 2.1.0 (2025-08-18)

Full Changelog: [v2.0.2...v2.1.0](https://github.com/openai/openai-go/compare/v2.0.2...v2.1.0)

### Features

* **api:** add new text parameters, expiration options ([323154c](https://github.com/openai/openai-go/commit/323154ccec2facf80d9ada76ed3c35553cb8896d))


### Documentation

* give https its missing "h" in Azure OpenAI REST API link ([#480](https://github.com/openai/openai-go/issues/480)) ([8a401c9](https://github.com/openai/openai-go/commit/8a401c9eecbe4936de487447be09757859001009))

## 2.0.2 (2025-08-09)

Full Changelog: [v2.0.1...v2.0.2](https://github.com/openai/openai-go/compare/v2.0.1...v2.0.2)
4 changes: 2 additions & 2 deletions README.md
@@ -26,7 +26,7 @@ Or to pin the version:
<!-- x-release-please-start-version -->

```sh
go get -u 'github.com/openai/openai-go@v2.0.2'
go get -u 'github.com/openai/openai-go@v2.1.0'
```

<!-- x-release-please-end -->
@@ -911,7 +911,7 @@ func main() {
const azureOpenAIEndpoint = "https://<azure-openai-resource>.openai.azure.com"

// The latest API versions, including previews, can be found here:
// ttps://learn.microsoft.com/en-us/azure/ai-services/openai/reference#rest-api-versionng
// https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#rest-api-versioning
const azureOpenAIAPIVersion = "2024-06-01"

tokenCredential, err := azidentity.NewDefaultAzureCredential(nil)
4 changes: 2 additions & 2 deletions aliases.go
@@ -340,7 +340,7 @@ type FunctionParameters = shared.FunctionParameters
// This is an alias to an internal type.
type Metadata = shared.Metadata

// **o-series models only**
// **gpt-5 and o-series models only**
//
// Configuration options for
// [reasoning models](https://platform.openai.com/docs/guides/reasoning).
@@ -382,7 +382,7 @@ const ReasoningSummaryConcise = shared.ReasoningSummaryConcise
// Equals "detailed"
const ReasoningSummaryDetailed = shared.ReasoningSummaryDetailed

// **o-series models only**
// **gpt-5 and o-series models only**
//
// Configuration options for
// [reasoning models](https://platform.openai.com/docs/guides/reasoning).
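The reasoning aliases above point at the shared reasoning configuration. A rough sketch of passing it to the Responses API, where the client setup, model name, and prompt are assumptions for illustration rather than part of this diff:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/openai/openai-go/v2"
	"github.com/openai/openai-go/v2/responses"
	"github.com/openai/openai-go/v2/shared"
)

func main() {
	// The client reads OPENAI_API_KEY from the environment by default.
	client := openai.NewClient()
	ctx := context.Background()

	// Reasoning options apply to gpt-5 and o-series models only.
	resp, err := client.Responses.New(ctx, responses.ResponseNewParams{
		Model: "gpt-5", // assumed model name for illustration
		Input: responses.ResponseNewParamsInputUnion{
			OfString: openai.String("Summarize the tradeoffs of batching requests."),
		},
		Reasoning: shared.ReasoningParam{
			Effort:  shared.ReasoningEffortLow,
			Summary: shared.ReasoningSummaryDetailed,
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.OutputText())
}
```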
2 changes: 0 additions & 2 deletions api.md
@@ -648,7 +648,6 @@ Params Types:
- <a href="https://pkg.go.dev/github.com/openai/openai-go/v2/responses">responses</a>.<a href="https://pkg.go.dev/github.com/openai/openai-go/v2/responses#ResponseOutputTextParam">ResponseOutputTextParam</a>
- <a href="https://pkg.go.dev/github.com/openai/openai-go/v2/responses">responses</a>.<a href="https://pkg.go.dev/github.com/openai/openai-go/v2/responses#ResponsePromptParam">ResponsePromptParam</a>
- <a href="https://pkg.go.dev/github.com/openai/openai-go/v2/responses">responses</a>.<a href="https://pkg.go.dev/github.com/openai/openai-go/v2/responses#ResponseReasoningItemParam">ResponseReasoningItemParam</a>
- <a href="https://pkg.go.dev/github.com/openai/openai-go/v2/responses">responses</a>.<a href="https://pkg.go.dev/github.com/openai/openai-go/v2/responses#ResponseTextConfigParam">ResponseTextConfigParam</a>
- <a href="https://pkg.go.dev/github.com/openai/openai-go/v2/responses">responses</a>.<a href="https://pkg.go.dev/github.com/openai/openai-go/v2/responses#ToolUnionParam">ToolUnionParam</a>
- <a href="https://pkg.go.dev/github.com/openai/openai-go/v2/responses">responses</a>.<a href="https://pkg.go.dev/github.com/openai/openai-go/v2/responses#ToolChoiceAllowedParam">ToolChoiceAllowedParam</a>
- <a href="https://pkg.go.dev/github.com/openai/openai-go/v2/responses">responses</a>.<a href="https://pkg.go.dev/github.com/openai/openai-go/v2/responses#ToolChoiceCustomParam">ToolChoiceCustomParam</a>
@@ -744,7 +743,6 @@ Response Types:
- <a href="https://pkg.go.dev/github.com/openai/openai-go/v2/responses">responses</a>.<a href="https://pkg.go.dev/github.com/openai/openai-go/v2/responses#ResponseRefusalDoneEvent">ResponseRefusalDoneEvent</a>
- <a href="https://pkg.go.dev/github.com/openai/openai-go/v2/responses">responses</a>.<a href="https://pkg.go.dev/github.com/openai/openai-go/v2/responses#ResponseStatus">ResponseStatus</a>
- <a href="https://pkg.go.dev/github.com/openai/openai-go/v2/responses">responses</a>.<a href="https://pkg.go.dev/github.com/openai/openai-go/v2/responses#ResponseStreamEventUnion">ResponseStreamEventUnion</a>
- <a href="https://pkg.go.dev/github.com/openai/openai-go/v2/responses">responses</a>.<a href="https://pkg.go.dev/github.com/openai/openai-go/v2/responses#ResponseTextConfig">ResponseTextConfig</a>
- <a href="https://pkg.go.dev/github.com/openai/openai-go/v2/responses">responses</a>.<a href="https://pkg.go.dev/github.com/openai/openai-go/v2/responses#ResponseTextDeltaEvent">ResponseTextDeltaEvent</a>
- <a href="https://pkg.go.dev/github.com/openai/openai-go/v2/responses">responses</a>.<a href="https://pkg.go.dev/github.com/openai/openai-go/v2/responses#ResponseTextDoneEvent">ResponseTextDoneEvent</a>
- <a href="https://pkg.go.dev/github.com/openai/openai-go/v2/responses">responses</a>.<a href="https://pkg.go.dev/github.com/openai/openai-go/v2/responses#ResponseUsage">ResponseUsage</a>
28 changes: 28 additions & 0 deletions batch.go
@@ -290,6 +290,9 @@ type BatchNewParams struct {
// Keys are strings with a maximum length of 64 characters. Values are strings with
// a maximum length of 512 characters.
Metadata shared.Metadata `json:"metadata,omitzero"`
// The expiration policy for the output and/or error file that are generated for a
// batch.
OutputExpiresAfter BatchNewParamsOutputExpiresAfter `json:"output_expires_after,omitzero"`
paramObj
}

@@ -322,6 +325,31 @@ const (
BatchNewParamsEndpointV1Completions BatchNewParamsEndpoint = "/v1/completions"
)

// The expiration policy for the output and/or error file that are generated for a
// batch.
//
// The properties Anchor, Seconds are required.
type BatchNewParamsOutputExpiresAfter struct {
// The number of seconds after the anchor time that the file will expire. Must be
// between 3600 (1 hour) and 2592000 (30 days).
Seconds int64 `json:"seconds,required"`
// Anchor timestamp after which the expiration policy applies. Supported anchors:
// `created_at`. Note that the anchor is the file creation time, not the time the
// batch is created.
//
// This field can be elided, and will marshal its zero value as "created_at".
Anchor constant.CreatedAt `json:"anchor,required"`
paramObj
}

func (r BatchNewParamsOutputExpiresAfter) MarshalJSON() (data []byte, err error) {
type shadow BatchNewParamsOutputExpiresAfter
return param.MarshalObject(r, (*shadow)(&r))
}
func (r *BatchNewParamsOutputExpiresAfter) UnmarshalJSON(data []byte) error {
return apijson.UnmarshalRoot(data, r)
}

type BatchListParams struct {
// A cursor for use in pagination. `after` is an object ID that defines your place
// in the list. For instance, if you make a list request and receive 100 objects,
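The new `OutputExpiresAfter` parameter sets an expiration policy on the files a batch produces. A minimal sketch of supplying it when creating a batch, assuming an already uploaded input file (the file ID is a placeholder, not taken from this diff):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/openai/openai-go/v2"
)

func main() {
	client := openai.NewClient() // reads OPENAI_API_KEY from the environment
	ctx := context.Background()

	batch, err := client.Batches.New(ctx, openai.BatchNewParams{
		Endpoint:         openai.BatchNewParamsEndpointV1ChatCompletions,
		CompletionWindow: openai.BatchNewParamsCompletionWindow24h,
		InputFileID:      "file-abc123", // placeholder for an uploaded batch input file
		// Expire the generated output/error files one day after the files are
		// created. Seconds must be between 3600 (1 hour) and 2592000 (30 days);
		// Anchor can be elided and defaults to "created_at".
		OutputExpiresAfter: openai.BatchNewParamsOutputExpiresAfter{
			Seconds: 86400,
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created batch:", batch.ID)
}
```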
3 changes: 3 additions & 0 deletions batch_test.go
@@ -33,6 +33,9 @@ func TestBatchNewWithOptionalParams(t *testing.T) {
Metadata: shared.Metadata{
"foo": "string",
},
OutputExpiresAfter: openai.BatchNewParamsOutputExpiresAfter{
Seconds: 3600,
},
})
if err != nil {
var apierr *openai.Error
4 changes: 2 additions & 2 deletions betathread.go
@@ -1065,7 +1065,7 @@ type BetaThreadNewAndRunParams struct {
// modifying the behavior on a per-run basis.
Tools []AssistantToolUnionParam `json:"tools,omitzero"`
// Controls for how a thread will be truncated prior to the run. Use this to
// control the intial context window of the run.
// control the initial context window of the run.
TruncationStrategy BetaThreadNewAndRunParamsTruncationStrategy `json:"truncation_strategy,omitzero"`
// Specifies the format that the model must output. Compatible with
// [GPT-4o](https://platform.openai.com/docs/models#gpt-4o),
@@ -1532,7 +1532,7 @@ func (r *BetaThreadNewAndRunParamsToolResourcesFileSearch) UnmarshalJSON(data []
}

// Controls for how a thread will be truncated prior to the run. Use this to
// control the intial context window of the run.
// control the initial context window of the run.
//
// The property Type is required.
type BetaThreadNewAndRunParamsTruncationStrategy struct {
8 changes: 4 additions & 4 deletions betathreadrun.go
@@ -364,7 +364,7 @@ type Run struct {
// this run.
Tools []AssistantToolUnion `json:"tools,required"`
// Controls for how a thread will be truncated prior to the run. Use this to
// control the intial context window of the run.
// control the initial context window of the run.
TruncationStrategy RunTruncationStrategy `json:"truncation_strategy,required"`
// Usage statistics related to the run. This value will be `null` if the run is not
// in a terminal state (i.e. `in_progress`, `queued`, etc.).
@@ -499,7 +499,7 @@ func (r *RunRequiredActionSubmitToolOutputs) UnmarshalJSON(data []byte) error {
}

// Controls for how a thread will be truncated prior to the run. Use this to
// control the intial context window of the run.
// control the initial context window of the run.
type RunTruncationStrategy struct {
// The truncation strategy to use for the thread. The default is `auto`. If set to
// `last_messages`, the thread will be truncated to the n most recent messages in
@@ -633,7 +633,7 @@ type BetaThreadRunNewParams struct {
// modifying the behavior on a per-run basis.
Tools []AssistantToolUnionParam `json:"tools,omitzero"`
// Controls for how a thread will be truncated prior to the run. Use this to
// control the intial context window of the run.
// control the initial context window of the run.
TruncationStrategy BetaThreadRunNewParamsTruncationStrategy `json:"truncation_strategy,omitzero"`
// A list of additional fields to include in the response. Currently the only
// supported value is `step_details.tool_calls[*].file_search.results[*].content`
@@ -837,7 +837,7 @@ func (r *BetaThreadRunNewParamsAdditionalMessageAttachmentToolFileSearch) Unmars
}

// Controls for how a thread will be truncated prior to the run. Use this to
// control the intial context window of the run.
// control the initial context window of the run.
//
// The property Type is required.
type BetaThreadRunNewParamsTruncationStrategy struct {
59 changes: 41 additions & 18 deletions chatcompletion.go
@@ -183,9 +183,8 @@ type ChatCompletion struct {
// - If set to 'default', then the request will be processed with the standard
// pricing and performance for the selected model.
// - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
// 'priority', then the request will be processed with the corresponding service
// tier. [Contact sales](https://openai.com/contact-sales) to learn more about
// Priority processing.
// '[priority](https://openai.com/api-priority-processing/)', then the request
// will be processed with the corresponding service tier.
// - When not set, the default behavior is 'auto'.
//
// When the `service_tier` parameter is set, the response body will include the
@@ -199,6 +198,8 @@ type ChatCompletion struct {
//
// Can be used in conjunction with the `seed` request parameter to understand when
// backend changes have been made that might impact determinism.
//
// Deprecated: deprecated
SystemFingerprint string `json:"system_fingerprint"`
// Usage statistics for the completion request.
Usage CompletionUsage `json:"usage"`
@@ -285,9 +286,8 @@ func (r *ChatCompletionChoiceLogprobs) UnmarshalJSON(data []byte) error {
// - If set to 'default', then the request will be processed with the standard
// pricing and performance for the selected model.
// - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
// 'priority', then the request will be processed with the corresponding service
// tier. [Contact sales](https://openai.com/contact-sales) to learn more about
// Priority processing.
// '[priority](https://openai.com/api-priority-processing/)', then the request
// will be processed with the corresponding service tier.
// - When not set, the default behavior is 'auto'.
//
// When the `service_tier` parameter is set, the response body will include the
@@ -598,9 +598,8 @@ type ChatCompletionChunk struct {
// - If set to 'default', then the request will be processed with the standard
// pricing and performance for the selected model.
// - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
// 'priority', then the request will be processed with the corresponding service
// tier. [Contact sales](https://openai.com/contact-sales) to learn more about
// Priority processing.
// '[priority](https://openai.com/api-priority-processing/)', then the request
// will be processed with the corresponding service tier.
// - When not set, the default behavior is 'auto'.
//
// When the `service_tier` parameter is set, the response body will include the
@@ -613,6 +612,8 @@ type ChatCompletionChunk struct {
// This fingerprint represents the backend configuration that the model runs with.
// Can be used in conjunction with the `seed` request parameter to understand when
// backend changes have been made that might impact determinism.
//
// Deprecated: deprecated
SystemFingerprint string `json:"system_fingerprint"`
// An optional field that will only be present when you set
// `stream_options: {"include_usage": true}` in your request. When present, it
@@ -815,9 +816,8 @@ func (r *ChatCompletionChunkChoiceLogprobs) UnmarshalJSON(data []byte) error {
// - If set to 'default', then the request will be processed with the standard
// pricing and performance for the selected model.
// - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
// 'priority', then the request will be processed with the corresponding service
// tier. [Contact sales](https://openai.com/contact-sales) to learn more about
// Priority processing.
// '[priority](https://openai.com/api-priority-processing/)', then the request
// will be processed with the corresponding service tier.
// - When not set, the default behavior is 'auto'.
//
// When the `service_tier` parameter is set, the response body will include the
@@ -3034,9 +3034,8 @@ type ChatCompletionNewParams struct {
// - If set to 'default', then the request will be processed with the standard
// pricing and performance for the selected model.
// - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
// 'priority', then the request will be processed with the corresponding service
// tier. [Contact sales](https://openai.com/contact-sales) to learn more about
// Priority processing.
// '[priority](https://openai.com/api-priority-processing/)', then the request
// will be processed with the corresponding service tier.
// - When not set, the default behavior is 'auto'.
//
// When the `service_tier` parameter is set, the response body will include the
@@ -3092,6 +3091,7 @@ type ChatCompletionNewParams struct {
// ensures the message the model generates is valid JSON. Using `json_schema` is
// preferred for models that support it.
ResponseFormat ChatCompletionNewParamsResponseFormatUnion `json:"response_format,omitzero"`
Text ChatCompletionNewParamsText `json:"text,omitzero"`
// Controls which (if any) tool is called by the model. `none` means the model will
// not call any tool and instead generates a message. `auto` means the model can
// pick between generating a message or calling one or more tools. `required` means
@@ -3242,9 +3242,8 @@ func (u ChatCompletionNewParamsResponseFormatUnion) GetType() *string {
// - If set to 'default', then the request will be processed with the standard
// pricing and performance for the selected model.
// - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
// 'priority', then the request will be processed with the corresponding service
// tier. [Contact sales](https://openai.com/contact-sales) to learn more about
// Priority processing.
// '[priority](https://openai.com/api-priority-processing/)', then the request
// will be processed with the corresponding service tier.
// - When not set, the default behavior is 'auto'.
//
// When the `service_tier` parameter is set, the response body will include the
@@ -3286,6 +3285,30 @@ func (u *ChatCompletionNewParamsStopUnion) asAny() any {
return nil
}

type ChatCompletionNewParamsText struct {
// Constrains the verbosity of the model's response. Lower values will result in
// more concise responses, while higher values will result in more verbose
// responses. Currently supported values are `low`, `medium`, and `high`.
//
// Any of "low", "medium", "high".
Verbosity string `json:"verbosity,omitzero"`
paramObj
}

func (r ChatCompletionNewParamsText) MarshalJSON() (data []byte, err error) {
type shadow ChatCompletionNewParamsText
return param.MarshalObject(r, (*shadow)(&r))
}
func (r *ChatCompletionNewParamsText) UnmarshalJSON(data []byte) error {
return apijson.UnmarshalRoot(data, r)
}

func init() {
apijson.RegisterFieldValidator[ChatCompletionNewParamsText](
"verbosity", "low", "medium", "high",
)
}

// Constrains the verbosity of the model's response. Lower values will result in
// more concise responses, while higher values will result in more verbose
// responses. Currently supported values are `low`, `medium`, and `high`.
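The new `Text` parameter exposes the verbosity control on chat completions, mirroring the test below. A minimal sketch, assuming a model that supports verbosity (the model name and prompt are placeholders, not part of this diff):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/openai/openai-go/v2"
)

func main() {
	client := openai.NewClient() // reads OPENAI_API_KEY from the environment
	ctx := context.Background()

	completion, err := client.Chat.Completions.New(ctx, openai.ChatCompletionNewParams{
		Model: "gpt-5", // assumed model name for illustration
		Messages: []openai.ChatCompletionMessageParamUnion{
			openai.UserMessage("Summarize this release in one sentence."),
		},
		// Constrain how verbose the reply is: "low", "medium", or "high".
		Text: openai.ChatCompletionNewParamsText{
			Verbosity: "low",
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(completion.Choices[0].Message.Content)
}
```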
3 changes: 3 additions & 0 deletions chatcompletion_test.go
@@ -86,6 +86,9 @@ func TestChatCompletionNewWithOptionalParams(t *testing.T) {
IncludeUsage: openai.Bool(true),
},
Temperature: openai.Float(1),
Text: openai.ChatCompletionNewParamsText{
Verbosity: "low",
},
ToolChoice: openai.ChatCompletionToolChoiceOptionUnionParam{
OfAuto: openai.String("none"),
},