content/en/llm_observability/_index.md
6 additions & 4 deletions
@@ -22,12 +22,12 @@ further_reading:
 
 ## Overview
 
-With LLM Observability, you can monitor, troubleshoot, and evaluate your LLM-powered applications, such as chatbots. You can investigate the root cause of issues, monitor operational performance, and evaluate the quality, privacy, and safety of your LLM applications.
+With LLM Observability, you can monitor, troubleshoot, and evaluate your LLM-powered applications, such as chatbots. You can investigate the root cause of issues, monitor operational performance, and evaluate the quality, privacy, and safety of your LLM applications.
 
 Each request fulfilled by your application is represented as a trace on the [**LLM Observability** page][1] in Datadog.
 
 {{< img src="llm_observability/traces.png" alt="A list of prompt-response pair traces on the LLM Observability page" style="width:100%;" >}}
-
+
 A trace can represent:
 
 - An individual LLM inference, including tokens, error information, and latency
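The bullet above says a single-inference trace carries token counts, error information, and latency. As a rough, stdlib-only sketch of that idea (this is not the Datadog SDK; `LLMSpan`, `traced_inference`, and the word-count token accounting are all hypothetical illustrations), one span of such a trace could be modeled as:

```python
import time
import uuid
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class LLMSpan:
    # Hypothetical stand-in for the data one LLM-inference trace span carries:
    # token counts, error information, and latency.
    name: str
    span_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    input_tokens: int = 0
    output_tokens: int = 0
    error: Optional[str] = None
    duration_ms: float = 0.0

def traced_inference(prompt: str, call_model: Callable[[str], str]) -> LLMSpan:
    """Wrap one model call and record latency, token counts, and any error."""
    span = LLMSpan(name="llm.inference")
    start = time.perf_counter()
    try:
        output = call_model(prompt)
        # Crude token-accounting stand-in: whitespace word counts.
        span.input_tokens = len(prompt.split())
        span.output_tokens = len(output.split())
    except Exception as exc:
        span.error = repr(exc)
    span.duration_ms = (time.perf_counter() - start) * 1000
    return span
```

For example, `traced_inference("hello world", lambda p: "hi there friend")` returns a span with two input tokens, three output tokens, no error, and a measured duration; a raising `call_model` instead populates `error`.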
@@ -62,12 +62,12 @@ Automatically scan and redact any sensitive data in your AI applications and ide
 
 ## See anomalies highlighted as insights
 
-LLM Observability Insights provides a monitoring experience that helps users identify anomalies in their operational metrics—such as duration and error rate—and their out-of-the-box (OOTB) evaluations.
+LLM Observability Insights provides a monitoring experience that helps users identify anomalies in their operational metrics—such as duration and error rate—and their [out-of-the-box (OOTB) evaluations][9].
 
 Outlier detection is performed across key dimensions:
 - Span name
 - Workflow type
-- Cluster input/output topics
+- [Cluster input/output topics][10]
 
 These outliers are analyzed over the past week and automatically surfaced in the corresponding time window selected by the user. This enables teams to proactively detect regressions, performance drifts, or unexpected behavior in their LLM applications.
 
@@ -97,3 +97,5 @@ See the [Setup documentation][5] for instructions on instrumenting your LLM appl