From 492c8001a7704e0fa88e7675061776d9d1c1570e Mon Sep 17 00:00:00 2001
From: Cursor Agent
Date: Mon, 4 Aug 2025 22:54:32 +0000
Subject: [PATCH 1/3] Add OpenAI Agents support to OpenLLMetry tracing
 documentation

Co-authored-by: 1sebastian1sosa1 <1sebastian1sosa1@gmail.com>
---
 openllmetry/tracing/openai-agents.mdx | 53 +++++++++++++++++++++++++++
 openllmetry/tracing/supported.mdx     |  1 +
 2 files changed, 54 insertions(+)
 create mode 100644 openllmetry/tracing/openai-agents.mdx

diff --git a/openllmetry/tracing/openai-agents.mdx b/openllmetry/tracing/openai-agents.mdx
new file mode 100644
index 0000000..4ee9630
--- /dev/null
+++ b/openllmetry/tracing/openai-agents.mdx
@@ -0,0 +1,53 @@
+---
+
+title: "OpenAI Agents"
+description: "Automatic tracing for the openai-agents package"
+---
+
+OpenLLMetry automatically instruments the official `openai-agents` package, allowing you to monitor agent workflows without any additional code changes.
+
+```python Python
+from dotenv import load_dotenv
+from traceloop.sdk import Traceloop
+from agents import Agent, Runner, function_tool
+
+load_dotenv()
+
+# Initialise tracing once in your application
+Traceloop.init(
+    app_name="minimal_openai_agents_example",
+    # Disable batching so you can see traces immediately during development
+    disable_batch=True,
+)
+
+@function_tool
+def multiply_numbers(a: float, b: float) -> float:
+    """Simple multiplication calculator tool."""
+    return a * b
+
+simple_agent = Agent(
+    model="gpt-4o-mini",
+    name="Simple Agent",
+    instructions="You are a helpful assistant. Answer questions clearly and concisely.",
+    tools=[multiply_numbers],
+)
+
+result = Runner.run_sync(simple_agent, "Calculate 150 × 2 then multiply by 3")
+print(result.final_output)
+```
+
+> **Tip**
+> No decorators or special wrappers are needed – every call made by the agent, its tools and the underlying OpenAI models is captured automatically.
+
+The resulting trace will show a clear hierarchy of spans for the agent run, tool invocations and model calls:
+
+That's it! For additional configuration options (such as exporting to a different backend or tweaking batching behaviour) see the [Python getting-started guide](/openllmetry/getting-started-python).
diff --git a/openllmetry/tracing/supported.mdx b/openllmetry/tracing/supported.mdx
index c333e3b..f0bd4bd 100644
--- a/openllmetry/tracing/supported.mdx
+++ b/openllmetry/tracing/supported.mdx
@@ -50,3 +50,4 @@ In the meantime, you can still use OpenLLMetry to report the [LLM and vector DB
 | [Haystack by deepset](https://haystack.deepset.ai/) | ✅ | ❌ |
 | [Langchain](https://www.langchain.com/) | ✅ | ✅ |
 | [LlamaIndex](https://www.llamaindex.ai/) | ✅ | ✅ |
+| [OpenAI Agents](https://github.com/openai/openai-agents) | ✅ | ⏳ |

From 1191cb07f00fa7bedddd0e84d3d58a9164fec927 Mon Sep 17 00:00:00 2001
From: Sebastian Sosa <1sebastian1sosa1@gmail.com>
Date: Mon, 4 Aug 2025 16:24:26 -0700
Subject: [PATCH 2/3] update support for openai-agents framework & backlink llm
 framework mentions

---
 monitoring/introduction.mdx            |  2 +-
 openllmetry/getting-started-nextjs.mdx |  2 +-
 openllmetry/getting-started-python.mdx |  2 +-
 openllmetry/getting-started-ts.mdx     |  2 +-
 openllmetry/introduction.mdx           |  2 +-
 openllmetry/tracing/annotations.mdx    |  2 +-
 openllmetry/tracing/openai-agents.mdx  | 53 --------------------------
 openllmetry/tracing/supported.mdx      |  2 +-
 8 files changed, 7 insertions(+), 60 deletions(-)
 delete mode 100644 openllmetry/tracing/openai-agents.mdx

diff --git a/monitoring/introduction.mdx b/monitoring/introduction.mdx
index 82bd21f..961b090 100644
--- a/monitoring/introduction.mdx
+++ b/monitoring/introduction.mdx
@@ -6,7 +6,7 @@ description: "Detect hallucinations and regressions in the quality of your LLMs"
 One of the key features of Traceloop is the ability to monitor the quality of your LLM outputs. It helps you to detect hallucinations and regressions in the quality of your models and prompts.
 
 To start monitoring your LLM outputs, make sure you installed OpenLLMetry and configured it to send data to Traceloop. If you haven't done that yet, you can follow the instructions in the [Getting Started](/openllmetry/getting-started) guide.
 
-Next, if you're not using a framework like LangChain or LlamaIndex, [make sure to annotate workflows and tasks](/openllmetry/tracing/decorators).
+Next, if you're not using a [supported LLM framework](/openllmetry/tracing/supported#frameworks), [make sure to annotate workflows and tasks](/openllmetry/tracing/decorators).
 
 You can then define any of the following [monitors](https://app.traceloop.com/monitors/prd) to track the quality of your LLM outputs.

diff --git a/openllmetry/getting-started-nextjs.mdx b/openllmetry/getting-started-nextjs.mdx
index 1652dd6..31ba6df 100644
--- a/openllmetry/getting-started-nextjs.mdx
+++ b/openllmetry/getting-started-nextjs.mdx
@@ -175,7 +175,7 @@ Assume you have a function that renders a prompt and calls an LLM, simply wrap i
 We also have compatible Typescript decorators for class methods which are more convenient.
 
-  If you're using an LLM framework like Haystack, Langchain or LlamaIndex -
+  If you're using a [supported LLM framework](/openllmetry/tracing/supported#frameworks) -
   we'll do that for you. No need to add any annotations to your code.
diff --git a/openllmetry/getting-started-python.mdx b/openllmetry/getting-started-python.mdx
index 3c13150..577932b 100644
--- a/openllmetry/getting-started-python.mdx
+++ b/openllmetry/getting-started-python.mdx
@@ -58,7 +58,7 @@ Assume you have a function that renders a prompt and calls an LLM, simply add `@
 
-  If you're using an LLM framework like Haystack, Langchain or LlamaIndex -
+  If you're using a [supported LLM framework](/openllmetry/tracing/supported#frameworks) -
   we'll do that for you. No need to add any annotations to your code.

diff --git a/openllmetry/getting-started-ts.mdx b/openllmetry/getting-started-ts.mdx
index 069156c..14309b8 100644
--- a/openllmetry/getting-started-ts.mdx
+++ b/openllmetry/getting-started-ts.mdx
@@ -73,7 +73,7 @@ Assume you have a function that renders a prompt and calls an LLM, simply wrap i
 We also have compatible Typescript decorators for class methods which are more convenient.
 
-  If you're using an LLM framework like Haystack, Langchain or LlamaIndex -
+  If you're using a [supported LLM framework](/openllmetry/tracing/supported#frameworks) -
   we'll do that for you. No need to add any annotations to your code.

diff --git a/openllmetry/introduction.mdx b/openllmetry/introduction.mdx
index 7ebf3d6..6b90e29 100644
--- a/openllmetry/introduction.mdx
+++ b/openllmetry/introduction.mdx
@@ -12,7 +12,7 @@ Tracing is done in a non-intrusive way, built on top of OpenTelemetry. You can
 choose to export the traces to Traceloop, or to your existing observability stack.
 
-  You can use OpenLLMetry whether you use a framework like LangChain, or
+  You can use OpenLLMetry whether you use a [supported LLM framework](/openllmetry/tracing/supported#frameworks), or
   directly interact with a foundation model API.

diff --git a/openllmetry/tracing/annotations.mdx b/openllmetry/tracing/annotations.mdx
index a2bb252..923fe00 100644
--- a/openllmetry/tracing/annotations.mdx
+++ b/openllmetry/tracing/annotations.mdx
@@ -11,7 +11,7 @@ description: "Enrich your traces by annotating chains and workflows in your app"
 Traceloop SDK supports several ways to annotate workflows, tasks, agents and tools in your code to get a more complete picture of your app structure.
 
-  If you're using a framework like Langchain, Haystack or LlamaIndex - no need
+  If you're using a [supported LLM framework](/openllmetry/tracing/supported#frameworks) - no need
   to do anything! OpenLLMetry will automatically detect the framework and
   annotate your traces.

diff --git a/openllmetry/tracing/openai-agents.mdx b/openllmetry/tracing/openai-agents.mdx
deleted file mode 100644
index 4ee9630..0000000
--- a/openllmetry/tracing/openai-agents.mdx
+++ /dev/null
@@ -1,53 +0,0 @@
----
-
-title: "OpenAI Agents"
-description: "Automatic tracing for the openai-agents package"
----
-
-OpenLLMetry automatically instruments the official `openai-agents` package, allowing you to monitor agent workflows without any additional code changes.
-
-```python Python
-from dotenv import load_dotenv
-from traceloop.sdk import Traceloop
-from agents import Agent, Runner, function_tool
-
-load_dotenv()
-
-# Initialise tracing once in your application
-Traceloop.init(
-    app_name="minimal_openai_agents_example",
-    # Disable batching so you can see traces immediately during development
-    disable_batch=True,
-)
-
-@function_tool
-def multiply_numbers(a: float, b: float) -> float:
-    """Simple multiplication calculator tool."""
-    return a * b
-
-simple_agent = Agent(
-    model="gpt-4o-mini",
-    name="Simple Agent",
-    instructions="You are a helpful assistant. Answer questions clearly and concisely.",
-    tools=[multiply_numbers],
-)
-
-result = Runner.run_sync(simple_agent, "Calculate 150 × 2 then multiply by 3")
-print(result.final_output)
-```
-
-> **Tip**
-> No decorators or special wrappers are needed – every call made by the agent, its tools and the underlying OpenAI models is captured automatically.
-
-The resulting trace will show a clear hierarchy of spans for the agent run, tool invocations and model calls:
-
-That's it! For additional configuration options (such as exporting to a different backend or tweaking batching behaviour) see the [Python getting-started guide](/openllmetry/getting-started-python).

diff --git a/openllmetry/tracing/supported.mdx b/openllmetry/tracing/supported.mdx
index f0bd4bd..b72d53c 100644
--- a/openllmetry/tracing/supported.mdx
+++ b/openllmetry/tracing/supported.mdx
@@ -50,4 +50,4 @@ In the meantime, you can still use OpenLLMetry to report the [LLM and vector DB
 | [Haystack by deepset](https://haystack.deepset.ai/) | ✅ | ❌ |
 | [Langchain](https://www.langchain.com/) | ✅ | ✅ |
 | [LlamaIndex](https://www.llamaindex.ai/) | ✅ | ✅ |
-| [OpenAI Agents](https://github.com/openai/openai-agents) | ✅ | ⏳ |
+| [OpenAI Agents](https://github.com/openai/openai-agents-python) | ✅ | ❌ |

From 6fb8a6f46b4e72058108ae0a899967c0977ba1db Mon Sep 17 00:00:00 2001
From: Sebastian Sosa <1sebastian1sosa1@gmail.com>
Date: Mon, 4 Aug 2025 16:35:40 -0700
Subject: [PATCH 3/3] Update monitoring/introduction.mdx

This is correct

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
---
 monitoring/introduction.mdx | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/monitoring/introduction.mdx b/monitoring/introduction.mdx
index 961b090..22c9d75 100644
--- a/monitoring/introduction.mdx
+++ b/monitoring/introduction.mdx
@@ -6,8 +6,7 @@ description: "Detect hallucinations and regressions in the quality of your LLMs"
 One of the key features of Traceloop is the ability to monitor the quality of your LLM outputs. It helps you to detect hallucinations and regressions in the quality of your models and prompts.
 
 To start monitoring your LLM outputs, make sure you installed OpenLLMetry and configured it to send data to Traceloop. If you haven't done that yet, you can follow the instructions in the [Getting Started](/openllmetry/getting-started) guide.
 
-Next, if you're not using a [supported LLM framework](/openllmetry/tracing/supported#frameworks), [make sure to annotate workflows and tasks](/openllmetry/tracing/decorators).
-
+Next, if you're not using a [supported LLM framework](/openllmetry/tracing/supported#frameworks), [make sure to annotate workflows and tasks](/openllmetry/tracing/annotations).
 You can then define any of the following [monitors](https://app.traceloop.com/monitors/prd) to track the quality of your LLM outputs.
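
A note on the manual annotation path these patches repeatedly link to: for apps that don't use a supported framework, tracing is structured with the Traceloop SDK's `@workflow` and `@task` decorators from `traceloop.sdk.decorators`. Below is a minimal sketch of that approach; the app name, function names, prompt, and model choice are illustrative only, not taken from these patches.

```python
from openai import OpenAI
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import task, workflow

Traceloop.init(app_name="joke_generation_service")
client = OpenAI()

@task(name="joke_creation")  # each decorated call becomes its own task span
def create_joke() -> str:
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": "Tell me a joke about OpenTelemetry"}],
    )
    return completion.choices[0].message.content

@workflow(name="joke_workflow")  # parent span for the task and LLM spans beneath it
def joke_workflow() -> str:
    return create_joke()

print(joke_workflow())
```

Exporting to a different backend, as mentioned in the closing note of the deleted page, is typically a matter of pointing the SDK at another OTLP-compatible endpoint, for example via the `TRACELOOP_BASE_URL` environment variable.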