From 0199fccf3320a671680cade526619a2bffe0403d Mon Sep 17 00:00:00 2001
From: jinatdatadog <97474042+jinatdatadog@users.noreply.github.com>
Date: Thu, 17 Jul 2025 12:14:03 -0400
Subject: [PATCH 1/4] add logs and metrics docs

---
 content/en/ddsql_reference/_index.md | 40 ++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)

diff --git a/content/en/ddsql_reference/_index.md b/content/en/ddsql_reference/_index.md
index 791d2b87f88a9..d8f447b6d5cb6 100644
--- a/content/en/ddsql_reference/_index.md
+++ b/content/en/ddsql_reference/_index.md
@@ -33,6 +33,7 @@ This documentation covers the SQL support available and includes:
 - [SQL functions](#functions)
 - [Window functions](#window-functions)
 - [JSON functions](#json-functions-and-operators)
+- [Table functions](#table-functions)
 - [Tags](#tags)
 
 
@@ -453,6 +454,45 @@ This table provides an overview of the supported window functions. For comprehens
 | json_extract_path_text(text json, text path…) | text | Extracts a JSON sub-object as text, defined by the path. Its behavior is equivalent to the [Postgres function with the same name][3]. For example, `json_extract_path_text(col, 'forest')` returns the value of the key `forest` for each JSON object in `col`. See the example below for a JSON array syntax.|
 | json_extract_path(text json, text path…) | JSON | Same functionality as `json_extract_path_text`, but returns a column of JSON type instead of text type.|
 
+## Table functions
+
+{{< callout url="https://www.datadoghq.com/product-preview/logs-metrics-support-in-ddsql-editor/" >}}
+Querying Logs and Metrics through DDSQL is in Preview. Use this form to request access.
+{{< /callout >}}
+
+Table functions are used to query Logs and Metrics.
+
+| Function | Description | Example |
+|---------------|----------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------|
+| `dd.logs(
+    filter => varchar,
+    columns => array < varchar >,
+    indexes ? => array < varchar >,
+    from_timestamp ? => timestamp,
+    to_timestamp ? => timestamp
+) AS (column_name type [, ...])` | Returns log data as a table. The columns parameter specifies which log fields to extract, and the AS clause defines the schema of the returned table. Optional: filtering by index or time range. When time isn't specified, we default to the past 1 hour of data. | {{< code-block lang="sql" >}}SELECT timestamp, host, service, message
+FROM dd.logs(
+    filter => 'source:java',
+    columns => ARRAY['timestamp','host',
+'service','message']
+) AS (
+    timestamp TIMESTAMP,
+    host VARCHAR,
+    service VARCHAR,
+    message VARCHAR
+) {{< /code-block >}} |
+| `dd.metric_scalar(
+    query varchar,
+    reducer varchar [, from_timestamp timestamp, to_timestamp timestamp]
+)` | Returns metric data as a scalar value. The function accepts a metrics query (with optional grouping), a reducer to determine how values are aggregated (avg, max, etc.), and optional timestamp parameters (default 1 hour) to define the time range. | {{< code-block lang="sql" >}}SELECT *
+FROM dd.metric_scalar(
+    'avg:system.cpu.user{*} by {service}',
+    'avg',
+    TIMESTAMP '2025-07-10 00:00:00.000-04:00',
+    TIMESTAMP '2025-07-17 00:00:00.000-04:00'
+  )
+ORDER BY value DESC; {{< /code-block >}} |
+
 ## Tags
 
 DDSQL exposes tags as an `hstore` type, which you can query using the PostgreSQL arrow operator. For example:

From 570061f977aad0561798c48bc494d21feeeb4fd8 Mon Sep 17 00:00:00 2001
From: jinatdatadog <97474042+jinatdatadog@users.noreply.github.com>
Date: Thu, 17 Jul 2025 13:09:42 -0400
Subject: [PATCH 2/4] fix table

---
 content/en/ddsql_reference/_index.md | 35 ++++------------------------
 1 file changed, 5 insertions(+), 30 deletions(-)

diff --git a/content/en/ddsql_reference/_index.md b/content/en/ddsql_reference/_index.md
index d8f447b6d5cb6..c7a4bad0decd0 100644
--- a/content/en/ddsql_reference/_index.md
+++ b/content/en/ddsql_reference/_index.md
@@ -462,36 +462,11 @@ Querying Logs and Metrics through DDSQL is in Preview. Use this form to request access.
 Table functions are used to query Logs and Metrics.
 
-| Function | Description | Example |
-|---------------|----------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------|
-| `dd.logs(
-    filter => varchar,
-    columns => array < varchar >,
-    indexes ? => array < varchar >,
-    from_timestamp ? => timestamp,
-    to_timestamp ? => timestamp
-) AS (column_name type [, ...])` | Returns log data as a table. The columns parameter specifies which log fields to extract, and the AS clause defines the schema of the returned table. Optional: filtering by index or time range. When time isn't specified, we default to the past 1 hour of data. | {{< code-block lang="sql" >}}SELECT timestamp, host, service, message
-FROM dd.logs(
-    filter => 'source:java',
-    columns => ARRAY['timestamp','host',
-'service','message']
-) AS (
-    timestamp TIMESTAMP,
-    host VARCHAR,
-    service VARCHAR,
-    message VARCHAR
-) {{< /code-block >}} |
-| `dd.metric_scalar(
-    query varchar,
-    reducer varchar [, from_timestamp timestamp, to_timestamp timestamp]
-)` | Returns metric data as a scalar value. The function accepts a metrics query (with optional grouping), a reducer to determine how values are aggregated (avg, max, etc.), and optional timestamp parameters (default 1 hour) to define the time range. | {{< code-block lang="sql" >}}SELECT *
-FROM dd.metric_scalar(
-    'avg:system.cpu.user{*} by {service}',
-    'avg',
-    TIMESTAMP '2025-07-10 00:00:00.000-04:00',
-    TIMESTAMP '2025-07-17 00:00:00.000-04:00'
-  )
-ORDER BY value DESC; {{< /code-block >}} |
+| Function | Description | Example |
+|----------|-------------|---------|
+| {{< code-block lang="sql" >}}dd.logs(<br>  filter => varchar,<br>  columns => array < varchar >,<br>  indexes ? => array < varchar >,<br>  from_timestamp ? => timestamp,<br>  to_timestamp ? => timestamp<br>) AS (column_name type [, ...]){{< /code-block >}} | Returns log data as a table. The `columns` parameter specifies which log fields to extract, and the `AS` clause defines the schema of the returned table. Optional: filtering by index or time range. When time isn't specified, it defaults to the past 1 hour of data. | {{< code-block lang="sql" >}}SELECT timestamp, host, service, message<br>FROM dd.logs(<br>  filter => 'source:java',<br>  columns => ARRAY['timestamp','host','service','message']<br>) AS (<br>  timestamp TIMESTAMP,<br>  host VARCHAR,<br>  service VARCHAR,<br>  message VARCHAR<br>){{< /code-block >}} |
+| {{< code-block lang="sql" >}}dd.metric_scalar(<br>  query varchar,<br>  reducer varchar [, from_timestamp timestamp, to_timestamp timestamp]<br>){{< /code-block >}} | Returns metric data as a scalar value. The function accepts a metrics query (with optional grouping), a reducer to determine how values are aggregated (`avg`, `max`, etc.), and optional timestamp parameters (default 1 hour) to define the time range. | {{< code-block lang="sql" >}}SELECT *<br>FROM dd.metric_scalar(<br>  'avg:system.cpu.user{*} by {service}',<br>  'avg',<br>  TIMESTAMP '2025-07-10 00:00:00.000-04:00',<br>  TIMESTAMP '2025-07-17 00:00:00.000-04:00'<br>)<br>ORDER BY value DESC;{{< /code-block >}} |
+
 
 ## Tags

From 3e2549d8454d9a83d1e34223bae2a5dd9a042d88 Mon Sep 17 00:00:00 2001
From: Michael Cretzman
Date: Thu, 17 Jul 2025 11:27:28 -0700
Subject: [PATCH 3/4] fixing MD table formatting

---
 content/en/ddsql_reference/_index.md | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/content/en/ddsql_reference/_index.md b/content/en/ddsql_reference/_index.md
index c7a4bad0decd0..2c9365984723b 100644
--- a/content/en/ddsql_reference/_index.md
+++ b/content/en/ddsql_reference/_index.md
@@ -464,8 +464,10 @@ Table functions are used to query Logs and Metrics.
 | Function | Description | Example |
 |----------|-------------|---------|
-| {{< code-block lang="sql" >}}dd.logs(<br>  filter => varchar,<br>  columns => array < varchar >,<br>  indexes ? => array < varchar >,<br>  from_timestamp ? => timestamp,<br>  to_timestamp ? => timestamp<br>) AS (column_name type [, ...]){{< /code-block >}} | Returns log data as a table. The `columns` parameter specifies which log fields to extract, and the `AS` clause defines the schema of the returned table. Optional: filtering by index or time range. When time isn't specified, it defaults to the past 1 hour of data. | {{< code-block lang="sql" >}}SELECT timestamp, host, service, message<br>FROM dd.logs(<br>  filter => 'source:java',<br>  columns => ARRAY['timestamp','host','service','message']<br>) AS (<br>  timestamp TIMESTAMP,<br>  host VARCHAR,<br>  service VARCHAR,<br>  message VARCHAR<br>){{< /code-block >}} |
-| {{< code-block lang="sql" >}}dd.metric_scalar(<br>  query varchar,<br>  reducer varchar [, from_timestamp timestamp, to_timestamp timestamp]<br>){{< /code-block >}} | Returns metric data as a scalar value. The function accepts a metrics query (with optional grouping), a reducer to determine how values are aggregated (`avg`, `max`, etc.), and optional timestamp parameters (default 1 hour) to define the time range. | {{< code-block lang="sql" >}}SELECT *<br>FROM dd.metric_scalar(<br>  'avg:system.cpu.user{*} by {service}',<br>  'avg',<br>  TIMESTAMP '2025-07-10 00:00:00.000-04:00',<br>  TIMESTAMP '2025-07-17 00:00:00.000-04:00'<br>)<br>ORDER BY value DESC;{{< /code-block >}} |
+| `dd.logs( filter => varchar, columns => array < varchar >, indexes ? => array < varchar >, from_timestamp ? => timestamp, to_timestamp ? => timestamp ) AS (column_name type [, ...])` | Returns log data as a table. The columns parameter specifies which log fields to extract, and the AS clause defines the schema of the returned table. Optional: filtering by index or time range. When time is not specified, we default to the past 1 hour of data. | ```sql SELECT timestamp, host, service, message FROM dd.logs( filter => 'source:java', columns => ARRAY['timestamp','host', 'service','message'] ) AS ( timestamp TIMESTAMP, host VARCHAR, service VARCHAR, message VARCHAR ) ``` |
+| `dd.metric_scalar( query varchar, reducer varchar [, from_timestamp timestamp, to_timestamp timestamp] )` | Returns metric data as a scalar value. The function accepts a metrics query (with optional grouping), a reducer to determine how values are aggregated (avg, max, etc.), and optional timestamp parameters (default 1 hour) to define the time range. | ```sql SELECT * FROM dd.metric_scalar( 'avg:system.cpu.user{*} by {service}', 'avg', TIMESTAMP '2025-07-10 00:00:00.000-04:00', TIMESTAMP '2025-07-17 00:00:00.000-04:00' ) ORDER BY value DESC; ``` |
+
+
 
 
 ## Tags

From d697512b2c9b637fd8df47bde598d06c0a811b82 Mon Sep 17 00:00:00 2001
From: Michael Cretzman
Date: Thu, 17 Jul 2025 12:45:55 -0700
Subject: [PATCH 4/4] convert MD table to HTML

---
 content/en/ddsql_reference/_index.md | 63 ++++++++++++++++++++++++++--
 1 file changed, 59 insertions(+), 4 deletions(-)

diff --git a/content/en/ddsql_reference/_index.md b/content/en/ddsql_reference/_index.md
index 2c9365984723b..40f49b7ef4c94 100644
--- a/content/en/ddsql_reference/_index.md
+++ b/content/en/ddsql_reference/_index.md
@@ -462,10 +462,65 @@ Querying Logs and Metrics through DDSQL is in Preview. Use this form to request access.
 Table functions are used to query Logs and Metrics.
 
-| Function | Description | Example |
-|----------|-------------|---------|
-| `dd.logs( filter => varchar, columns => array < varchar >, indexes ? => array < varchar >, from_timestamp ? => timestamp, to_timestamp ? => timestamp ) AS (column_name type [, ...])` | Returns log data as a table. The columns parameter specifies which log fields to extract, and the AS clause defines the schema of the returned table. Optional: filtering by index or time range. When time is not specified, we default to the past 1 hour of data. | ```sql SELECT timestamp, host, service, message FROM dd.logs( filter => 'source:java', columns => ARRAY['timestamp','host', 'service','message'] ) AS ( timestamp TIMESTAMP, host VARCHAR, service VARCHAR, message VARCHAR ) ``` |
-| `dd.metric_scalar( query varchar, reducer varchar [, from_timestamp timestamp, to_timestamp timestamp] )` | Returns metric data as a scalar value. The function accepts a metrics query (with optional grouping), a reducer to determine how values are aggregated (avg, max, etc.), and optional timestamp parameters (default 1 hour) to define the time range. | ```sql SELECT * FROM dd.metric_scalar( 'avg:system.cpu.user{*} by {service}', 'avg', TIMESTAMP '2025-07-10 00:00:00.000-04:00', TIMESTAMP '2025-07-17 00:00:00.000-04:00' ) ORDER BY value DESC; ``` |
+<table>
+  <thead>
+    <tr>
+      <th>Function</th>
+      <th>Description</th>
+      <th>Example</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>
+<pre>
+dd.logs(
+    filter => varchar,
+    columns => array < varchar >,
+    indexes ? => array < varchar >,
+    from_timestamp ? => timestamp,
+    to_timestamp ? => timestamp
+) AS (column_name type [, ...])
+</pre>
+      </td>
+      <td>Returns log data as a table. The <code>columns</code> parameter specifies which log fields to extract, and the <code>AS</code> clause defines the schema of the returned table. Filtering by index or time range is optional. When no time range is specified, the query defaults to the past hour of data.</td>
+      <td>
+{{< code-block lang="sql" >}}
+SELECT timestamp, host, service, message
+FROM dd.logs(
+    filter => 'source:java',
+    columns => ARRAY['timestamp','host', 'service','message']
+) AS (
+    timestamp TIMESTAMP,
+    host VARCHAR,
+    service VARCHAR,
+    message VARCHAR
+){{< /code-block >}}
+      </td>
+    </tr>
+    <tr>
+      <td>
+<pre>
+dd.metric_scalar(
+    query varchar,
+    reducer varchar [, from_timestamp timestamp, to_timestamp timestamp]
+)
+</pre>
+      </td>
+      <td>Returns metric data as a scalar value. The function accepts a metrics query (with optional grouping), a reducer to determine how values are aggregated (<code>avg</code>, <code>max</code>, and so on), and optional timestamp parameters (default: the past hour) to define the time range.</td>
+      <td>
+{{< code-block lang="sql" >}}
+SELECT *
+FROM dd.metric_scalar(
+    'avg:system.cpu.user{*} by {service}',
+    'avg',
+    TIMESTAMP '2025-07-10 00:00:00.000-04:00',
+    TIMESTAMP '2025-07-17 00:00:00.000-04:00'
+)
+ORDER BY value DESC;{{< /code-block >}}
+      </td>
+    </tr>
+  </tbody>
+</table>
 
 
 ## Tags
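For reviewers trying out the final version of the table, the two documented behaviors (explicit column schemas and the past-hour fallback when no timestamps are passed) compose into queries like the following sketch. The `status:error` filter and the aggregation are illustrative, and the query assumes Preview access to Logs querying in DDSQL:

```sql
-- No from_timestamp/to_timestamp arguments: per the dd.logs description,
-- the query falls back to the past hour of log data.
SELECT service, count(*) AS error_count
FROM dd.logs(
    filter => 'status:error',
    columns => ARRAY['service']
) AS (
    service VARCHAR
)
GROUP BY service
ORDER BY error_count DESC;
```

Whether aggregation over a table function is supported end to end in the Preview is an assumption here; the per-function examples in the table itself are the authoritative shapes.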