No tool output found for function call when chaining responses. #46092
Labels
- customer-reported: Issues that are reported by GitHub users external to the Azure organization.
- needs-triage: Workflow: This is a new issue that needs to be triaged to the appropriate team.
- question: The issue doesn't require a change to the product in order to be resolved. Most issues start as that.
Description
- Package Name: azure.ai.projects.aio
- Package Version: 2.01
- Operating System: Windows
- Python Version: 3.12
Describe the bug
When a full conversation that includes tool calls and their results is passed to create a new response, the API raises `No tool output found for function call` for a function call/result pair that already exists in the conversation.
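For reference, the check that appears to fail concerns matching each `function_call` item to a `function_call_output` item by `call_id`. A minimal sketch (plain dicts, with a hypothetical `call_id` value) of the pairing the forwarded conversation already contains:

```python
# Sketch of the conversation items involved (hypothetical call_id value).
# The service should accept this, since every function_call is paired
# with a function_call_output carrying the same call_id.
conversation = [
    {"role": "user", "content": "Summarize Q3 revenue."},
    {"type": "function_call", "name": "query_data",
     "call_id": "call_abc123", "arguments": '{"topic": "Q3 revenue"}'},
    {"type": "function_call_output", "call_id": "call_abc123",
     "output": "Data from default: Revenue for Q3 revenue is $10M."},
]

# Collect call_ids on each side and confirm every call has an output.
calls = {i["call_id"] for i in conversation if i.get("type") == "function_call"}
outputs = {i["call_id"] for i in conversation if i.get("type") == "function_call_output"}
assert calls <= outputs  # holds here, yet the service still rejects the input
```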
To Reproduce
```python
import asyncio
import json
import os

from azure.ai.projects.aio import AIProjectClient
from azure.identity.aio import AzureCliCredential
from dotenv import load_dotenv
from openai import AsyncOpenAI

load_dotenv()


async def run_agent_1(client: AsyncOpenAI):
    # Agent 1
    instructions = "Call query_data exactly once with the user's topic. Return the raw result."

    def query_data(topic: str) -> str:
        """Query internal data about a topic."""
        print(f" [query_data] topic={topic}")
        return f"Data from default: Revenue for {topic} is $10M."

    tools = [
        {
            "type": "function",
            "name": "query_data",
            "parameters": {
                "type": "object",
                "properties": {
                    "topic": {"type": "string"},
                },
                "required": ["topic"],
            },
        },
    ]
    input_message = [{"role": "user", "content": "Summarize Q3 revenue."}]
    conversation = input_message
    agent_1_response_1 = await client.responses.create(
        model="gpt-5.4",
        input=input_message,
        instructions=instructions,
        tools=tools,
    )
    conversation += agent_1_response_1.output
    for item in agent_1_response_1.output:
        if item.type == "function_call" and item.name == "query_data":
            topic = json.loads(item.arguments)["topic"]
            data = query_data(topic)
            print(f"Responding to function call {item.name} with call_id {item.call_id}")
            input_message = [
                {
                    "type": "function_call_output",
                    "call_id": item.call_id,
                    "output": data,
                }
            ]
            conversation.extend(input_message)
    agent_1_response_2 = await client.responses.create(
        model="gpt-5.4",
        instructions=instructions,
        tools=tools,
        input=input_message,
        previous_response_id=agent_1_response_1.id,
    )
    conversation += agent_1_response_2.output
    print(agent_1_response_2.output_text)
    return conversation


async def run_agent_2(client: AsyncOpenAI, conversation):
    instructions = "Call get_style exactly once, then rewrite the data as a short report."

    def get_style(format: str = "plain") -> str:
        """Get the output formatting style."""
        print(f" [get_style] format={format}")
        return f"Use {format} formatting with bullet points."

    tools = [
        {
            "type": "function",
            "name": "get_style",
            "parameters": {
                "type": "object",
                "properties": {
                    "format": {"type": "string"},
                },
                "required": ["format"],
            },
        },
    ]
    agent_2_response_1 = await client.responses.create(
        model="gpt-5.4",
        input=conversation,
        instructions=instructions,
        tools=tools,
    )
    for item in agent_2_response_1.output:
        if item.type == "function_call" and item.name == "get_style":
            format = json.loads(item.arguments)["format"]
            style = get_style(format)
            print(f"Responding to function call {item.name} with call_id {item.call_id}")
            input_message = [
                {
                    "type": "function_call_output",
                    "call_id": item.call_id,
                    "output": style,
                }
            ]
    agent_2_response_2 = await client.responses.create(
        model="gpt-5.4",
        instructions=instructions,
        tools=tools,
        input=input_message,
        previous_response_id=agent_2_response_1.id,
    )
    print(agent_2_response_2.output_text)


async def main():
    client = AIProjectClient(
        endpoint=os.environ["FOUNDRY_PROJECT_ENDPOINT"],
        model=os.environ["FOUNDRY_MODEL"],
        credential=AzureCliCredential(),
    ).get_openai_client()
    # # Error won't happen when pointing to OpenAI
    # client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])
    print("=====Run from Agent 1=====")
    conversation = await run_agent_1(client)
    print("\n====Conversation====")
    for item in conversation:
        print(item)
    print("\n=====Run from Agent 2=====")
    await run_agent_2(client, conversation)


if __name__ == "__main__":
    asyncio.run(main())
```
Expected behavior
The code runs without error.
Additional context
This error does not reproduce consistently; it occurs in roughly 7 or 8 out of 10 runs. When pointing to the OpenAI service directly, it doesn't seem to happen at all.
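Because the failure is intermittent, I have been able to keep the script running with a retry wrapper around `responses.create`. This is only a sketch to work around the flakiness while the issue is triaged, not a fix; matching the error by message substring is an assumption, and in practice you would narrow the `except` to the SDK's specific exception type:

```python
import asyncio


async def create_with_retry(client, *, attempts=3, delay=1.0, **kwargs):
    """Retry responses.create when the intermittent error surfaces.

    Sketch only: retries on any exception whose message contains
    "No tool output found"; re-raises everything else immediately.
    """
    last_exc = None
    for _ in range(attempts):
        try:
            return await client.responses.create(**kwargs)
        except Exception as exc:  # narrow to the SDK's error type in practice
            if "No tool output found" not in str(exc):
                raise
            last_exc = exc
            await asyncio.sleep(delay)
    raise last_exc
```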