merge from origin #3


Open · wants to merge 261 commits into base: `v0.0.16-tool-call`

Conversation

@vcshih vcshih commented Jul 21, 2025

No description provided.

Rehan-Ul-Haq and others added 30 commits May 30, 2025 12:14
### Overview

This PR fixes a small typo in the docstring of the
`is_strict_json_schema` abstract method of the `AgentOutputSchemaBase`
class in `agent_output.py`.

### Changes

- Corrected the word “valis” to “valid” in the docstring.

### Motivation

Clear and correct documentation improves code readability and reduces
confusion for users and contributors.

### Checklist

- [x] I have reviewed the docstring after making the change.
- [x] No functionality is affected.
- [x] The change follows the repository’s contribution guidelines.
People keep trying to fix this, but it's a breaking change.
This pull request resolves #777; if you think we should introduce a new
item type for MCP call output, please let me know. Since other hosted tools
use this event, I believe using the same one is good to go.
The EmbeddedResource from MCP tool call contains a field with type
AnyUrl that is not JSON-serializable. To avoid this exception, use
item.model_dump(mode="json") to ensure a JSON-serializable return value.
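A minimal sketch of the serialization issue, assuming pydantic v2; the `Resource` model below is an illustrative stand-in for MCP's `EmbeddedResource`, not the actual MCP type:

```python
import json

from pydantic import AnyUrl, BaseModel


# Illustrative model standing in for MCP's EmbeddedResource, which
# carries an AnyUrl-typed field.
class Resource(BaseModel):
    uri: AnyUrl
    text: str


item = Resource(uri="https://example.com/doc", text="hello")

# model_dump() keeps the field as a Url object, which json.dumps rejects.
try:
    json.dumps(item.model_dump())
except TypeError:
    pass  # Url is not JSON-serializable

# mode="json" coerces every field to a JSON-compatible type (AnyUrl -> str).
payload = item.model_dump(mode="json")
assert isinstance(payload["uri"], str)
json.dumps(payload)  # now succeeds
```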
### Summary:
Towards #767. We were caching the list of tools for an agent, so if you
did `agent.tools.append(...)` from a tool call, the next call to the
model wouldn't include the new tool. This is a bug.

### Test Plan:
Unit tests. Note that now MCP tools are listed each time the agent runs
(users can still cache the `list_tools` however).
Closes #796. Shouldn't start a busy waiting thread if there aren't any
traces.

Test plan
```
import threading
assert threading.active_count() == 1
import agents
assert threading.active_count() == 1
```
### Summary:
Allows a user to do `function_tool(is_enabled=<some_callable>)`; the
callable is called when the agent runs.

This allows you to dynamically enable/disable a tool based on the
context/env.

The meta-goal is to allow `Agent` to be effectively immutable. That
enables some nice things down the line, and this allows you to
dynamically modify the tools list without mutating the agent.

### Test Plan:
Unit tests
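The enable/disable resolution can be sketched in plain Python. The names below (`FunctionTool`, `RunContext`, `enabled_tools`) are hypothetical stand-ins illustrating the pattern, not the SDK's actual implementation:

```python
from dataclasses import dataclass
from typing import Callable, List, Union


@dataclass
class RunContext:
    # Hypothetical context object carrying environment info.
    env: str = "prod"


@dataclass
class FunctionTool:
    # is_enabled may be a plain bool or a callable evaluated at run time.
    name: str
    is_enabled: Union[bool, Callable[[RunContext], bool]] = True


def enabled_tools(tools: List[FunctionTool], ctx: RunContext) -> List[FunctionTool]:
    """Resolve each tool's is_enabled flag (bool or callable) when the agent runs."""
    resolved = []
    for tool in tools:
        flag = tool.is_enabled(ctx) if callable(tool.is_enabled) else tool.is_enabled
        if flag:
            resolved.append(tool)
    return resolved


tools = [
    FunctionTool("search"),
    FunctionTool("debug_dump", is_enabled=lambda ctx: ctx.env != "prod"),
]

assert [t.name for t in enabled_tools(tools, RunContext(env="prod"))] == ["search"]
assert [t.name for t in enabled_tools(tools, RunContext(env="dev"))] == ["search", "debug_dump"]
```

Because the filtering happens per run, the tools list can vary without ever mutating the agent itself.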
bump version
## Summary
- describe semantic versioning and release steps
- add release page to documentation nav

## Testing
- `make format`
- `make lint`
- `make mypy`
- `make tests`
- `make build-docs`


------
https://chatgpt.com/codex/tasks/task_i_68409d25afdc83218ad362d10c8a80a1
## Summary
- ensure `Handoff.get_transfer_message` emits valid JSON
- test transfer message validity
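The fix boils down to building the message with `json.dumps` rather than manual string formatting; the message shape below is an assumption for illustration, not necessarily the SDK's exact payload:

```python
import json


def get_transfer_message(agent_name: str) -> str:
    # json.dumps handles quoting and escaping, so the result is always
    # valid JSON even when the agent name contains quotes.
    return json.dumps({"assistant": agent_name})


msg = get_transfer_message('Agent "X"')
assert json.loads(msg) == {"assistant": 'Agent "X"'}
```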

## Testing
- `make format`
- `make lint`
- `make mypy`
- `make tests`


------
https://chatgpt.com/codex/tasks/task_i_68432f925b048324a16878d28e850841
In deep agent workflows, each sub-agent automatically performs an LLM
step to summarize its tool calls before returning to its parent. This
leads to:
1. Excessive latency: every nested agent invokes the LLM, compounding
delays.
2. Loss of raw tool data: summaries may strip out details the top-level
agent needs.

We discovered that `Agent.as_tool(...)` already accepts an
(undocumented) `custom_output_extractor` parameter. By providing a
callback, a parent agent can override what the sub-agent returns, e.g.
hand back raw tool outputs or a custom slice, so that only the final
agent does summarization.

---

This PR adds a “Custom output extraction” section to the Markdown docs
under “Agents as tools,” with a minimal code example.
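The extraction hook can be sketched as follows; `RunResult` and the extractor signatures are simplified stand-ins for illustration, not the SDK's actual definitions:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class RunResult:
    final_output: str            # the sub-agent's LLM summary
    raw_tool_outputs: List[str]  # unsummarized tool results


def default_extractor(result: RunResult) -> str:
    # Default behavior: return the sub-agent's own summary.
    return result.final_output


def raw_output_extractor(result: RunResult) -> str:
    # Skip the sub-agent's summarization entirely and hand the raw tool
    # outputs back to the parent, which summarizes once at the end.
    return "\n".join(result.raw_tool_outputs)


result = RunResult(
    final_output="The weather is nice.",
    raw_tool_outputs=['{"temp_c": 21, "wind_kph": 9}'],
)
assert raw_output_extractor(result) == '{"temp_c": 21, "wind_kph": 9}'
```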
This PR fixes issue:
#559

By adding the tool_call_id to the RunContextWrapper prior to calling
tools. This gives the ability to access the tool_call_id in the
implementation of the tool.
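A minimal sketch of the idea; the field names here mirror the description above but are illustrative, not the SDK's exact definitions:

```python
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class RunContextWrapper:
    # Simplified stand-in: the wrapper now carries the originating
    # tool_call_id alongside the user context.
    context: Any
    tool_call_id: Optional[str] = None


def my_tool(wrapper: RunContextWrapper) -> str:
    # The tool body can correlate its work with the specific call.
    return f"handled {wrapper.tool_call_id}"


assert my_tool(RunContextWrapper(context={}, tool_call_id="call_123")) == "handled call_123"
```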
Sometimes users want to provide parameters specific to a model provider.
This is an escape hatch.
## Summary
- ensure `name_override` is always used in `function_schema`
- test name override when docstring info is disabled

## Testing
- `make format`
- `make lint`
- `make mypy`
- `make tests`

Resolves #860
------
https://chatgpt.com/codex/tasks/task_i_684f1cf885b08321b4dd3f4294e24ca2
I replaced the `timedelta` parameters for MCP timeouts with `float`
values, addressing issue #845.

Given that the MCP official repository has incorporated these changes in
[this PR](modelcontextprotocol/python-sdk#941),
updating the MCP version in openai-agents and specifying the timeouts as
floats should be enough.
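For callers migrating existing configuration, the conversion is a one-liner; the parameter name below is purely illustrative:

```python
from datetime import timedelta

# Previously a timeout might have been passed as a timedelta; the float
# seconds now expected can be derived with total_seconds().
old_timeout = timedelta(seconds=30)
new_timeout = old_timeout.total_seconds()
assert new_timeout == 30.0
```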
Add support for the new openai prompts feature.
Was added to function tools before, now handoffs. Towards #918
In the REPL utility, the final output was printed after the streaming,
causing a duplication issue.

I only removed the lines within stream mode that printed the final
output; it should not affect any other use case of the utility.

It addresses a comment made in #784.
People sometimes use Claude Code to send PRs; this allows the agents.md
to be shared for that.
This pull request resolves #890 and enables the translation script to
accept a file to work on.
Like I did for the TypeScript SDK project (see
openai/openai-agents-js#97), we may want to have
the labels to exclude the issues from auto-closing. Some of the issues
that have been open for a while could just need time.
### Summary
This PR fixes a duplicated line in `__init__.py`

### Checks
- [x] I've added/updated the relevant documentation
- [x] I've run `make lint` and `make format`

Co-authored-by: easonsshi <[email protected]>
…ices is empty (#935)

This pull request resolves #604; just checking the existence of the data and avoiding the runtime exception should make sense.
### Summary

This pull request fixes [issue
#892](#892) by
adding a missing docstring to the `fetch_user_age` tool function in
`docs/context.md`.

### Problem

Many non-OpenAI LLMs (such as Claude, Gemini, Mistral, etc.) are unable
to use the `fetch_user_age` function because it lacks a docstring. As a
result, they return responses like:

> "I cannot determine the user's age because the available tools lack
the ability to fetch user-specific information."

### Changes Made

- Added a one-line docstring to the `fetch_user_age` function
- Improved return statement to match expected tool output

```python
@function_tool
async def fetch_user_age(wrapper: RunContextWrapper[UserInfo]) -> str:
    """Fetch the age of the user. Call this function to get user's age information."""
    return f"The user {wrapper.context.name} is 47 years old"
```
rm-openai and others added 30 commits August 15, 2025 19:22
### Summary

Adds `is_enabled` parameter to `Agent.as_tool()` method for
conditionally enabling/disabling agent tools at runtime. Supports
boolean values and callable functions for dynamic tool filtering
in multi-agent orchestration.

### Test plan

- Added unit tests in `tests/test_agent_as_tool.py`
- Added example in `examples/agent_patterns/agents_as_tools_conditional.py`
- Updated documentation in `docs/tools.md`
- All tests pass

### Issue number

Closes #1097

### Checks

- [x] I've added new tests (if relevant)
- [x] I've added/updated the relevant documentation
- [x] I've run `make lint` and `make format`
- [x] I've made sure tests pass
---------

Co-authored-by: thein <[email protected]>
Automated update of translated documentation

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Automated update of translated documentation

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Automated update of translated documentation

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Updated the external tracing processor list, added Agenta, and linked a
tutorial for integration.
Automated update of translated documentation

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
## Summary
- allow configuring retries with exponential backoff when listing tools
or calling a tool on an MCP server via `max_retry_attempts` (supporting
`-1` for unlimited retries) and `retry_backoff_seconds_base`
- propagate the retry parameters through the stdio, SSE, and streamable
HTTP server implementations so callers can tune retries when
constructing these servers
- test that `call_tool` and `list_tools` retry appropriately
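The retry policy above can be sketched in plain Python. The parameter names mirror the PR (`max_retry_attempts`, `retry_backoff_seconds_base`), but the helper itself is a hypothetical illustration, not the SDK's implementation:

```python
import time


def call_with_retries(fn, max_retry_attempts=0, retry_backoff_seconds_base=1.0,
                      sleep=time.sleep):
    """Retry fn with exponential backoff; -1 attempts means retry forever."""
    attempt = 0
    while True:
        try:
            return fn()
        except Exception:
            # Stop once the attempt budget is spent (unless unlimited).
            if max_retry_attempts != -1 and attempt >= max_retry_attempts:
                raise
            # Exponential backoff: base * 2**attempt seconds.
            sleep(retry_backoff_seconds_base * (2 ** attempt))
            attempt += 1


# Simulate a server call that fails twice, then succeeds.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"


delays = []  # capture sleeps instead of actually waiting
assert call_with_retries(flaky, max_retry_attempts=5,
                         retry_backoff_seconds_base=0.5,
                         sleep=delays.append) == "ok"
assert delays == [0.5, 1.0]
```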

## Testing
- `make lint`
- `make mypy`
- `make test` *(fails: No rule to make target 'test')*
- `make tests`
- `make old_version_tests` *(fails: Request failed after 3 retries
(tunnel error))*

------
https://chatgpt.com/codex/tasks/task_i_68a73cab8b9c8321876f6f3dd1dfcd20
…ce (#1548)

For every event a new TypeAdapter is created, which has a significant
performance impact.
<img width="1412" height="1166" alt="image"
src="https://github.com/user-attachments/assets/fc1f9629-9342-4347-b4d5-5e218b73c4e8"
/>
Creating it once and reusing it makes event handling a lot faster.
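The fix amounts to hoisting the adapter to module scope, assuming pydantic v2; the names below are illustrative, not the SDK's actual internals:

```python
from pydantic import TypeAdapter

# Build the TypeAdapter once at import time and reuse it, instead of
# constructing a new one inside the per-event hot path (schema
# construction is the expensive part).
EVENT_ADAPTER: TypeAdapter = TypeAdapter(dict)


def handle_event(raw: bytes) -> dict:
    # Reuses the cached adapter; no per-event schema construction.
    return EVENT_ADAPTER.validate_json(raw)


assert handle_event(b'{"type": "delta", "index": 0}') == {"type": "delta", "index": 0}
```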

Co-authored-by: Michiel De Witte <[email protected]>
Documentation for SQLAlchemy-powered sessions, to be merged after
merging and releasing #1357