diff --git a/monitoring/introduction.mdx b/monitoring/introduction.mdx
index 82bd21f..22c9d75 100644
--- a/monitoring/introduction.mdx
+++ b/monitoring/introduction.mdx
@@ -6,8 +6,7 @@ description: "Detect hallucinations and regressions in the quality of your LLMs"
One of the key features of Traceloop is the ability to monitor the quality of your LLM outputs. It helps you detect hallucinations and regressions in the quality of your models and prompts.
To start monitoring your LLM outputs, make sure you've installed OpenLLMetry and configured it to send data to Traceloop. If you haven't done that yet, follow the instructions in the [Getting Started](/openllmetry/getting-started) guide.
-Next, if you're not using a framework like LangChain or LlamaIndex, [make sure to annotate workflows and tasks](/openllmetry/tracing/decorators).
-
+Next, if you're not using a [supported LLM framework](/openllmetry/tracing/supported#frameworks), [make sure to annotate workflows and tasks](/openllmetry/tracing/annotations).
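+
+For example, a minimal sketch of an annotated workflow (the names here are illustrative; adapt them to your app):
+
+```python
+from openai import OpenAI
+from traceloop.sdk import Traceloop
+from traceloop.sdk.decorators import workflow
+
+Traceloop.init(app_name="joke_app")
+client = OpenAI()
+
+@workflow(name="generate_joke")
+def generate_joke() -> str:
+    # The workflow name groups the LLM call below into a single trace
+    completion = client.chat.completions.create(
+        model="gpt-4o-mini",
+        messages=[{"role": "user", "content": "Tell me a joke"}],
+    )
+    return completion.choices[0].message.content
+```
+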
You can then define any of the following [monitors](https://app.traceloop.com/monitors/prd) to track the quality of your LLM outputs.
diff --git a/openllmetry/getting-started-nextjs.mdx b/openllmetry/getting-started-nextjs.mdx
index 1652dd6..31ba6df 100644
--- a/openllmetry/getting-started-nextjs.mdx
+++ b/openllmetry/getting-started-nextjs.mdx
@@ -175,7 +175,7 @@ Assume you have a function that renders a prompt and calls an LLM, simply wrap i
We also have compatible TypeScript decorators for class methods, which are more convenient.
- If you're using an LLM framework like Haystack, Langchain or LlamaIndex -
+ If you're using a [supported LLM framework](/openllmetry/tracing/supported#frameworks) -
we'll do that for you. No need to add any annotations to your code.
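+
+As a rough sketch, wrapping looks like this (assuming the `withWorkflow` helper; check the SDK reference for the exact signature in your version):
+
+```typescript
+import * as traceloop from "@traceloop/node-server-sdk";
+import OpenAI from "openai";
+
+traceloop.initialize({ appName: "joke_app", disableBatch: true });
+const openai = new OpenAI();
+
+async function generateJoke() {
+  // withWorkflow reports everything inside the callback as one named workflow
+  return await traceloop.withWorkflow({ name: "generate_joke" }, async () => {
+    const completion = await openai.chat.completions.create({
+      model: "gpt-4o-mini",
+      messages: [{ role: "user", content: "Tell me a joke" }],
+    });
+    return completion.choices[0].message.content;
+  });
+}
+```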
diff --git a/openllmetry/getting-started-python.mdx b/openllmetry/getting-started-python.mdx
index 3c13150..577932b 100644
--- a/openllmetry/getting-started-python.mdx
+++ b/openllmetry/getting-started-python.mdx
@@ -58,7 +58,7 @@ Assume you have a function that renders a prompt and calls an LLM, simply add `@
- If you're using an LLM framework like Haystack, Langchain or LlamaIndex -
+ If you're using a [supported LLM framework](/openllmetry/tracing/supported#frameworks) -
we'll do that for you. No need to add any annotations to your code.
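+
+For instance, with a supported framework, initialization is all you need; a minimal sketch (assuming LangChain here; the other listed frameworks behave the same way):
+
+```python
+from langchain_openai import ChatOpenAI
+from traceloop.sdk import Traceloop
+
+Traceloop.init(app_name="chat_app")
+
+# No decorators needed: LangChain calls are traced automatically
+llm = ChatOpenAI(model="gpt-4o-mini")
+response = llm.invoke("Tell me a joke")
+print(response.content)
+```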
diff --git a/openllmetry/getting-started-ts.mdx b/openllmetry/getting-started-ts.mdx
index 069156c..14309b8 100644
--- a/openllmetry/getting-started-ts.mdx
+++ b/openllmetry/getting-started-ts.mdx
@@ -73,7 +73,7 @@ Assume you have a function that renders a prompt and calls an LLM, simply wrap i
We also have compatible TypeScript decorators for class methods, which are more convenient.
- If you're using an LLM framework like Haystack, Langchain or LlamaIndex -
+ If you're using a [supported LLM framework](/openllmetry/tracing/supported#frameworks) -
we'll do that for you. No need to add any annotations to your code.
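+
+A rough sketch of the decorator style on a class method (assuming the `workflow` method decorator; exact usage may vary across SDK versions):
+
+```typescript
+import * as traceloop from "@traceloop/node-server-sdk";
+import OpenAI from "openai";
+
+traceloop.initialize({ appName: "joke_app", disableBatch: true });
+
+class JokeService {
+  private openai = new OpenAI();
+
+  // Every call to this method is traced as a named workflow
+  @traceloop.workflow({ name: "generate_joke" })
+  async generateJoke() {
+    const completion = await this.openai.chat.completions.create({
+      model: "gpt-4o-mini",
+      messages: [{ role: "user", content: "Tell me a joke" }],
+    });
+    return completion.choices[0].message.content;
+  }
+}
+```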
diff --git a/openllmetry/introduction.mdx b/openllmetry/introduction.mdx
index 7ebf3d6..6b90e29 100644
--- a/openllmetry/introduction.mdx
+++ b/openllmetry/introduction.mdx
@@ -12,7 +12,7 @@ Tracing is done in a non-intrusive way, built on top of OpenTelemetry.
You can choose to export the traces to Traceloop, or to your existing observability stack.
- You can use OpenLLMetry whether you use a framework like LangChain, or
+ You can use OpenLLMetry whether you use a [supported LLM framework](/openllmetry/tracing/supported#frameworks), or
directly interact with a foundation model API.
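+
+For example, a minimal sketch of exporting to your own stack instead of Traceloop (assuming the Python SDK's `api_endpoint` option; setting the `TRACELOOP_BASE_URL` environment variable works the same way):
+
+```python
+from traceloop.sdk import Traceloop
+
+# Point the exporter at any OTLP-compatible endpoint,
+# e.g. a locally running OpenTelemetry Collector (address is hypothetical)
+Traceloop.init(app_name="my_app", api_endpoint="http://localhost:4318")
+```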
diff --git a/openllmetry/tracing/annotations.mdx b/openllmetry/tracing/annotations.mdx
index a2bb252..923fe00 100644
--- a/openllmetry/tracing/annotations.mdx
+++ b/openllmetry/tracing/annotations.mdx
@@ -11,7 +11,7 @@ description: "Enrich your traces by annotating chains and workflows in your app"
The Traceloop SDK supports several ways to annotate workflows, tasks, agents, and tools in your code to get a more complete picture of your app's structure.
- If you're using a framework like Langchain, Haystack or LlamaIndex - no need
+ If you're using a [supported LLM framework](/openllmetry/tracing/supported#frameworks) - no need
to do anything! OpenLLMetry will automatically detect the framework and
annotate your traces.
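+
+Otherwise, a minimal sketch of the decorators (the names are illustrative; there are matching `agent` and `tool` decorators as well):
+
+```python
+from traceloop.sdk import Traceloop
+from traceloop.sdk.decorators import task, workflow
+
+Traceloop.init(app_name="pirate_app")
+
+@task(name="pick_topic")
+def pick_topic() -> str:
+    # Each task appears as a nested span inside the enclosing workflow
+    return "pirates"
+
+@workflow(name="joke_pipeline")
+def joke_pipeline() -> str:
+    topic = pick_topic()
+    return f"Tell me a joke about {topic}"
+```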
diff --git a/openllmetry/tracing/supported.mdx b/openllmetry/tracing/supported.mdx
index c333e3b..b72d53c 100644
--- a/openllmetry/tracing/supported.mdx
+++ b/openllmetry/tracing/supported.mdx
@@ -50,3 +50,4 @@ In the meantime, you can still use OpenLLMetry to report the [LLM and vector DB
| [Haystack by deepset](https://haystack.deepset.ai/) | ✅ | ❌ |
| [Langchain](https://www.langchain.com/) | ✅ | ✅ |
| [LlamaIndex](https://www.llamaindex.ai/) | ✅ | ✅ |
+| [OpenAI Agents](https://github.com/openai/openai-agents-python) | ✅ | ❌ |
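+
+For OpenAI Agents, for example, a minimal sketch (assuming the instrumentation is picked up automatically by `Traceloop.init()`, like the other supported frameworks):
+
+```python
+from agents import Agent, Runner
+from traceloop.sdk import Traceloop
+
+Traceloop.init(app_name="agents_app")
+
+# Agent runs are traced without any extra annotations
+agent = Agent(name="Assistant", instructions="You are a helpful assistant")
+result = Runner.run_sync(agent, "Tell me a joke")
+print(result.final_output)
+```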