IS NOT NULL;
+\`\`\`
+
+:::note
+
+\`NULL\` values still occupy disk space.
+
+:::
+
+## The UUID type
+
+QuestDB natively supports the \`UUID\` type, which should be used for \`UUID\`
+columns instead of storing \`UUIDs\` as \`strings\`. \`UUID\` columns are internally
+stored as 128-bit integers, enabling more efficient filtering and sorting.
+Inserting strings into a \`UUID\` column is permitted, but the data will be
+converted to the \`UUID\` type.
+
+\`\`\`questdb-sql title="Inserting strings into a UUID column"
+CREATE TABLE my_table (
+ id UUID
+);
+[...]
+INSERT INTO my_table VALUES ('a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11');
+[...]
+SELECT * FROM my_table WHERE id = 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11';
+\`\`\`
+
+If you use the [PostgreSQL Wire Protocol](/docs/reference/api/postgres/) then
+you can use the \`uuid\` type in your queries. The JDBC API does not define a
+dedicated UUID type, but the PostgreSQL JDBC driver supports it in prepared statements:
+
+\`\`\`java
+UUID uuid = UUID.randomUUID();
+PreparedStatement ps = connection.prepareStatement("INSERT INTO my_table VALUES (?)");
+ps.setObject(1, uuid);
+\`\`\`
+
+[QuestDB Client Libraries](/docs/ingestion-overview/#first-party-clients) can
+send \`UUIDs\` as \`strings\` to be converted to UUIDs by the server.
+
+## IPv4
+
+QuestDB supports the IPv4 data type. It has validity checks and some
+IPv4-specific functions.
+
+IPv4 addresses exist within the range of \`0.0.0.1\` - \`255.255.255.255\`.
+
+An all-zero address - \`0.0.0.0\` - is interpreted as \`NULL\`.
+
+Create a column with the IPv4 data type like this:
+
+\`\`\`sql
+-- Creating a table named traffic with two ipv4 columns: src and dst.
+CREATE TABLE traffic (ts timestamp, src ipv4, dst ipv4) timestamp(ts) PARTITION BY DAY;
+\`\`\`
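+
+Because \`0.0.0.0\` maps to \`NULL\`, standard \`NULL\` predicates can be used to find
+such rows. A sketch against the \`traffic\` table above:
+
+\`\`\`questdb-sql title="0.0.0.0 is read back as NULL"
+INSERT INTO traffic VALUES (now(), '0.0.0.0', '192.168.1.1');
+
+-- Matches the row just inserted, since its src is NULL:
+SELECT * FROM traffic WHERE src IS NULL;
+\`\`\`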
+
+IPv4 addresses support a wide range of existing SQL functions, and there are
+some operators specifically for them. For a full list, see
+[IPv4 Operators](/docs/reference/operators/ipv4/).
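+
+For instance, assuming the \`traffic\` table above, the containment operator \`<<\`
+from that reference can filter addresses by subnet (a sketch):
+
+\`\`\`questdb-sql title="Filtering by subnet"
+SELECT * FROM traffic
+WHERE src << '192.168.0.0/16';
+\`\`\`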
+
+### Limitations
+
+You cannot auto-create an IPv4 column using the InfluxDB Line Protocol, since it
+doesn't support this type explicitly. The QuestDB server cannot distinguish
+between string and IPv4 data. However, you can insert IPv4 data into a
+pre-existing IPv4 column by sending IPs as strings.
+`
+ },
+ {
+ path: "sql/declare.md",
+ title: "DECLARE keyword",
+ headers: ["Syntax", "Mechanics", "Limitations"],
+ content: `\`DECLARE\` specifies a series of variable bindings used throughout your query.
+
+This syntax is supported within \`SELECT\` queries.
+
+## Syntax
+
+
+
+## Mechanics
+
+The \`DECLARE\` keyword comes before the \`SELECT\` clause in your query:
+
+\`\`\`questdb-sql title="Basic DECLARE" demo
+DECLARE
+ @x := 5
+SELECT @x;
+\`\`\`
+
+Use the variable binding operator \`:=\` (the "walrus" operator) to associate expressions with names.
+
+:::tip
+
+It is easy to accidentally omit the \`:\` and write a plain equality \`=\` instead
+of the variable binding operator \`:=\`.
+
+If you do, you should see an error message like this:
+
+> expected variable assignment operator \`:=\`
+
+:::
+
+The above example declares a single binding, which states that the variable \`@x\` is replaced with the constant integer \`5\`.
+
+Variables are resolved at parse time, meaning that they are no longer present
+by the time the query is compiled.
+
+So the above example reduces to this simple query:
+
+\`\`\`questdb-sql title="basic DECLARE post-reduction" demo
+SELECT 5;
+\`\`\`
+
+| 5 |
+|---|
+| 5 |
+
+
+### Multiple bindings
+
+To declare multiple variables, separate the bind expressions with commas \`,\`:
+
+\`\`\`questdb-sql title="Multiple variable bindings" demo
+DECLARE
+ @x := 5,
+ @y := 2
+SELECT @x + @y;
+\`\`\`
+
+| column |
+|--------|
+| 7 |
+
+### Variables as functions
+
+A variable need not be just a constant. It could also be a function call,
+and variables with function values can be nested:
+
+\`\`\`questdb-sql title="declaring function variable" demo
+DECLARE
+ @today := today(),
+ @start := interval_start(@today),
+ @end := interval_end(@today)
+SELECT @today = interval(@start, @end);
+\`\`\`
+
+| column |
+|--------|
+| true |
+
+
+### Declarations in subqueries
+
+Declarations made in parent queries are available in subqueries.
+
+\`\`\`questdb-sql title="variable shadowing" demo
+DECLARE
+ @x := 5
+SELECT y FROM (
+ SELECT @x AS y
+);
+\`\`\`
+
+| y |
+|---|
+| 5 |
+
+#### Shadowing
+
+If a subquery declares a variable of the same name, then the variable is shadowed
+and takes on the new value.
+
+However, any queries above this subquery are unaffected - the
+variable bind is not globally mutated.
+
+\`\`\`questdb-sql title="variable shadowing" demo
+DECLARE
+ @x := 5
+SELECT @x + y FROM (
+ DECLARE @x := 10
+ SELECT @x AS y
+);
+\`\`\`
+
+| column |
+|--------|
+| 15 |
+
+### Declarations as subqueries
+
+Declarations themselves can be subqueries.
+
+We suggest not overusing this feature, as separating a subquery's definition from
+where it executes may make queries harder to debug.
+
+Nevertheless, it is possible to define a variable as a subquery:
+
+\`\`\`questdb-sql title="table cursor as a variable" demo
+DECLARE
+ @subquery := (SELECT timestamp FROM trades)
+SELECT * FROM @subquery;
+\`\`\`
+
+You can even use already-declared variables to define your subquery variable:
+
+\`\`\`questdb-sql title="nesting decls inside decl subqueries" demo
+DECLARE
+ @timestamp := timestamp,
+ @symbol := symbol,
+ @subquery := (SELECT @timestamp, @symbol FROM trades)
+SELECT * FROM @subquery;
+\`\`\`
+
+### Declarations in CTEs
+
+Naturally, \`DECLARE\` also works with CTEs:
+
+\`\`\`questdb-sql title="declarations inside CTEs" demo
+DECLARE
+ @x := 5
+WITH first AS (
+ DECLARE @x := 10
+ SELECT @x as a -- a = 10
+),
+second AS (
+ DECLARE @y := 4
+ SELECT
+ @x + @y as b, -- b = 5 + 4 = 9
+ a -- a = 10
+ FROM first
+)
+SELECT a, b
+FROM second;
+\`\`\`
+
+| a | b |
+|----|---|
+| 10 | 9 |
+
+
+### Bind variables
+
+\`DECLARE\` syntax will work with prepared statements over PG Wire, so long as the client library
+does not perform syntax validation that rejects the \`DECLARE\` syntax:
+
+\`\`\`questdb-sql
+DECLARE @x := ?, @y := ?
+SELECT @x::int + @y::int;
+
+-- Then bind the following values: (1, 2)
+\`\`\`
+
+| column |
+|--------|
+| 3 |
+
+This can be useful to minimise repeated bind variables.
+
+For example, rather than passing the same value to multiple positional arguments,
+you could instead use a declared variable and send a single bind variable:
+
+
+\`\`\`questdb-sql
+-- instead of this:
+SELECT ? as name, id FROM users WHERE name = ?;
+
+-- do this:
+DECLARE @name := ?
+SELECT @name as name, id FROM users WHERE name = @name;
+\`\`\`
+Or for repeating columns:
+
+\`\`\`questdb-sql
+DECLARE
+  @col := ?,
+  @symbol := ?
+SELECT avg(@col), min(@col), max(@col)
+FROM trades
+WHERE symbol = @symbol;
+\`\`\`
+
+## Limitations
+
+Most basic expressions are supported, and we provide examples later in this document.
+
+We suggest you use variables to simplify repeated constants within your code, and
+to minimise the number of places you need to update when a constant changes.
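+
+For example, a constant that appears in several predicates can be declared once,
+leaving a single place to update it later (a sketch, assuming the demo \`trades\`
+table used elsewhere in this document):
+
+\`\`\`questdb-sql title="One constant, declared once"
+DECLARE
+  @threshold := 100
+SELECT symbol, count()
+FROM trades
+WHERE price > @threshold AND amount < @threshold;
+\`\`\`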
+
+### Disallowed expressions
+
+However, not all expressions are supported. The following are explicitly disallowed:
+
+#### Bracket lists
+
+\`\`\`questdb-sql title="bracket lists are not allowed"
+DECLARE
+ @symbols := ('BTC-USD', 'ETH-USD')
+SELECT timestamp, price, symbol
+FROM trades
+WHERE symbol IN @symbols;
+
+-- error: unexpected bind expression - bracket lists not supported
+\`\`\`
+
+#### SQL statement fragments
+
+\`\`\`questdb-sql title="sql fragments are not allowed"
+DECLARE
+ @x := FROM trades
+SELECT 5 @x;
+
+-- table and column names that are SQL keywords have to be enclosed in double quotes, such as "FROM"
+\`\`\`
+
+### Language client support
+
+Some SQL client libraries do not allow identifiers to be passed as if they were
+ordinary values. One example is \`psycopg\`.
+In this case, you should use an alternative API to splice identifiers into the query, for example:
+
+
+\`\`\`python title="psycopg"
+cur.execute(
+ sql.SQL("""
+ DECLARE @col := {}
+ SELECT max(@col), min(@col), avg(price)
+ FROM btc_trades;
+ """).format(sql.Identifier('price')))
+\`\`\`
+
+## Examples
+
+### SAMPLE BY
+
+\`\`\`questdb-sql title="DECLARE with SAMPLE BY" demo
+DECLARE
+ @period := 1m,
+ @window := '2024-11-25',
+ @symbol := 'ETH-USD'
+SELECT
+ timestamp, symbol, side, sum(amount) as volume
+FROM trades
+WHERE side = 'sell'
+AND timestamp IN @window
+AND symbol = @symbol
+SAMPLE BY @period
+FILL(NULL);
+\`\`\`
+
+| timestamp | symbol | side | volume |
+|-----------------------------|---------|------|------------------|
+| 2024-11-25T00:00:00.000000Z | ETH-USD | sell | 153.470574999999 |
+| 2024-11-25T00:01:00.000000Z | ETH-USD | sell | 298.927738 |
+| 2024-11-25T00:02:00.000000Z | ETH-USD | sell | 66.253058 |
+| ... | ... | ... | ... |
+
+### INSERT INTO SELECT
+
+\`\`\`questdb-sql
+INSERT INTO trades (timestamp, symbol)
+SELECT * FROM
+(
+ DECLARE
+ @x := now(),
+ @y := 'ETH-USD'
+ SELECT @x as timestamp, @y as symbol
+);
+\`\`\`
+
+### CREATE TABLE AS SELECT
+
+\`\`\`questdb-sql
+CREATE TABLE trades AS (
+ DECLARE
+ @x := now(),
+ @y := 'ETH-USD'
+ SELECT @x as timestamp, @y as symbol, 123 as price
+);
+\`\`\`
+
+`
+ },
+ {
+ path: "sql/distinct.md",
+ title: "DISTINCT keyword",
+ headers: ["Syntax"],
+    content: `\`SELECT DISTINCT\` is used to return only distinct (i.e. different) values from a
+column as part of a [SELECT statement](/docs/reference/sql/select/).
+
+## Syntax
+
+
+
+## Examples
+
+The following query will return a list of all unique movie IDs in the table.
+
+\`\`\`questdb-sql title="Simple query"
+SELECT DISTINCT movieId
+FROM ratings;
+\`\`\`
+
+\`SELECT DISTINCT\` can be used in conjunction with more advanced queries and
+filters.
+
+\`\`\`questdb-sql title="With aggregate"
+SELECT DISTINCT movieId, count()
+FROM ratings;
+\`\`\`
+
+\`\`\`questdb-sql title="With filter"
+SELECT DISTINCT movieId, count()
+FROM ratings
+WHERE score > 3;
+\`\`\`
+`
+ },
+ {
+ path: "sql/drop-mat-view.md",
+ title: "DROP MATERIALIZED VIEW",
+ headers: ["Syntax", "IF EXISTS", "See also"],
+ content: `:::info
+
+Materialized View support is now generally available (GA) and ready for production use.
+
+If you are using versions earlier than \`8.3.1\`, we suggest you upgrade at your earliest convenience.
+
+:::
+
+\`DROP MATERIALIZED VIEW\` permanently deletes a materialized view and its
+contents.
+
+The deletion is **permanent** and **not recoverable**, except if the view was
+created in a non-standard volume. In such cases, the view is only logically
+removed while the underlying data remains intact in its volume.
+
+Disk space is reclaimed asynchronously after the materialized view is dropped.
+
+Existing read queries for this view may delay space reclamation.
+
+## Syntax
+
+
+
+## Example
+
+\`\`\`questdb-sql
+DROP MATERIALIZED VIEW trades_1h;
+\`\`\`
+
+## IF EXISTS
+
+Add an optional \`IF EXISTS\` clause after the \`DROP MATERIALIZED VIEW\` keywords
+to indicate that the selected materialized view should be dropped, but only if
+it exists.
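+
+This avoids an error when the view has already been dropped:
+
+\`\`\`questdb-sql
+DROP MATERIALIZED VIEW IF EXISTS trades_1h;
+\`\`\`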
+
+## See also
+
+For more information on the concept, see the
+[introduction](/docs/concept/mat-views/) and [guide](/docs/guides/mat-views/) on
+materialized views.
+`
+ },
+ {
+ path: "sql/drop.md",
+ title: "DROP TABLE keyword",
+ headers: ["Syntax", "Description", "See also"],
+ content: `\`DROP TABLE\` permanently deletes a table and its contents. \`DROP ALL TABLES\`
+permanently deletes all tables, all materialized views, and their contents on disk.
+
+:::note
+
+[Backup your database](/docs/operations/backup/) to avoid unintended data loss.
+
+:::
+
+## Syntax
+
+
+
+### IF EXISTS
+
+An optional \`IF EXISTS\` clause may be added directly after the \`DROP TABLE\`
+keywords to indicate that the selected table should be dropped if it exists.
+
+## Description
+
+This command irreversibly deletes the data in the target table, unless the table
+was created in a non-standard volume (see
+[CREATE TABLE IN VOLUME](/docs/reference/sql/create-table/#table-target-volume)),
+in which case the table is only logically removed and its data remains intact in
+that volume. If in doubt, make sure you have created
+[backups](/docs/operations/backup/) of your data.
+
+Disk space is reclaimed asynchronously after the table is dropped. Ongoing table
+reads might delay space reclamation.
+
+## Example
+
+\`\`\`questdb-sql
+DROP TABLE ratings;
+\`\`\`
+
+\`\`\`questdb-sql
+DROP ALL TABLES;
+\`\`\`
+
+## See also
+
+To delete the data inside a table but keep the table and its structure, use
+[TRUNCATE](/docs/reference/sql/truncate/).
+`
+ },
+ {
+ path: "sql/explain.md",
+ title: "EXPLAIN keyword",
+    headers: ["Syntax", "Limitations", "See also"],
+ content: `\`EXPLAIN\` displays the execution plan of an \`INSERT\`, \`SELECT\`, or \`UPDATE\`
+statement.
+
+## Syntax
+
+
+
+### Description
+
+A query execution plan shows how a statement will be implemented: which tables
+are going to be accessed and how, which join methods are employed, which
+predicates are JIT-compiled, etc. \`EXPLAIN\` output is a tree of nodes containing
+properties and subnodes (also known as child nodes).
+
+In a plan such as:
+
+| QUERY PLAN |
+| -------------------------------------------------------------------------- |
+| Async JIT Filter |
+| filter: 100 \`<\` l |
+| workers: 1 |
+| DataFrame |
+| Row forward scan |
+| Frame forward scan on: tab |
+
+there are:
+
+- 4 nodes:
+ - Async JIT Filter
+ - DataFrame
+ - Row forward scan
+ - Frame forward scan
+- 2 properties (both belong to Async JIT Filter node):
+ - filter
+ - workers
+
+For simplicity, some nodes have special properties shown on the same line as the
+node type; for example, \`Filter filter: b.age=10\` or \`Limit lo: 10\`.
+
+The following list contains some plan node types:
+
+- \`Async Filter\` - a parallelized filter that evaluates expressions with Java
+ code. In certain scenarios, it also implements the \`LIMIT\` keyword.
+- \`Async JIT Filter\` - a parallelized filter that evaluates expressions with
+ Just-In-Time-compiled filter. In certain scenarios, it also implements the
+ \`LIMIT\` keyword.
+- \`Interval forward\` - scans one or more table data ranges based on the
+  designated timestamp predicates. Scan endpoints are found via a binary search
+  on the timestamp column.
+- \`CachedWindow\` - container for window functions that copies data to memory and
+ sorts it, e.g. [row_number()](/docs/reference/function/window/#row_number)
+- \`Window\` - container for window functions optimized for frames ordered by
+ designated timestamp. Instead of copying the underlying dataset to memory it
+ buffers just enough per-partition values to compute function result.
+- \`Count\` - returns the count of records in subnode.
+- \`Cursor-order scan\` - scans table records using row ids taken from an index,
+ in index order - first all row ids linked to index value A, then B, etc.
+- \`DataFrame\` - full or partial table scan. It contains two children:
+ - row cursor - which iterates over rows inside a frame (e.g.
+ \`Row forward scan\`).
+ - frame cursor - which iterates over table partitions or partition chunks
+ (e.g. \`Frame forward scan\`).
+- \`Filter\` - standalone (non-JIT-compiled, non-parallelized) filter.
+- \`Frame forward/backward scan\` - scans table partitions in a specified
+ direction.
+- \`GroupBy\` - group by with or without key(s). If \`vectorized\` field shows
+ \`true\`, then the node is parallelized and uses vectorized calculations.
+- \`Hash\` - subnode of this node is used to build a hash table that is later
+ looked up (usually in a \`JOIN\` clause but also applies to \`EXCEPT\` or
+ \`INTERSECT\`).
+- \`Index forward/backward scan\` - scans all row ids associated with a given
+ \`symbol\` value from start to finish or vice versa.
+- \`Limit\` - standalone node implementing the \`LIMIT\` keyword. Other nodes can
+ implement \`LIMIT\` internally, e.g. the \`Sort\` node.
+- \`Row forward/backward scan\` - scans data frame (usually partitioned) records
+ in a specified direction.
+- \`Sort\` - sorts data. If low or hi property is specified, then the sort buffer
+ size is limited and a number of rows are skipped after sorting.
+- \`SampleBy\` - \`SAMPLE BY\` keyword implementation. If the \`fill\` is not shown,
+ it means \`fill(none)\`.
+- \`Selected Record\` - used to reorder or rename columns. It does not do any
+ significant processing on its own.
+- \`Table-order scan\` - scans table records using row ids taken from an index in
+ table (physical) order - from the lowest to highest row id.
+- \`VirtualRecord\` - adds expressions to a subnode's columns.
+
+Other node types should be easy to link to SQL and database concepts, e.g.
+\`Except\`, \`Hash Join\` or \`Lt Join\`.
+
+Many nodes, especially join and sort, have 'light' and 'heavy' variants, e.g.
+\`Hash Join Light\` and \`Hash Join\`. The former is used when child node(s) support
+efficient random access lookups (e.g. \`DataFrame\`) so storing row id in the
+buffer is enough; otherwise, the whole record needs to be copied and the 'heavy'
+factory is used.
+
+## Examples
+
+To illustrate how \`EXPLAIN\` works, consider the \`trades\` table
+[in the QuestDB demo instance](https://demo.questdb.io/):
+
+\`\`\`questdb-sql
+CREATE TABLE trades (
+ symbol SYMBOL CAPACITY 256 CACHE,
+ side SYMBOL CAPACITY 256 CACHE,
+ price DOUBLE,
+ amount DOUBLE,
+ timestamp TIMESTAMP
+) TIMESTAMP (timestamp) PARTITION BY DAY
+\`\`\`
+
+### Using \`EXPLAIN\` for the plan for \`SELECT\`
+
+The following query highlights the plan for \`ORDER BY\` on the table:
+
+\`\`\`questdb-sql
+EXPLAIN SELECT * FROM trades ORDER BY timestamp DESC;
+\`\`\`
+
+| QUERY PLAN |
+| ------------------------------------------------------ |
+| DataFrame |
+| Row backward scan |
+| Frame backward scan on: trades |
+
+The plan shows that no sort is required and the result is produced by scanning
+the table backward. The scanning direction is possible because the data in the
+\`trades\` table is stored in timestamp order.
+
+Now, let's check the plan for \`trades\` with a simple filter:
+
+\`\`\`questdb-sql
+EXPLAIN SELECT * FROM trades WHERE amount > 100.0;
+\`\`\`
+
+| QUERY PLAN |
+| ----------------------------------------------------------------------------- |
+| Async JIT Filter |
+| filter: 100.0 \`<\` amount |
+| workers: 1 |
+| DataFrame |
+| Row forward scan |
+| Frame forward scan on: trades |
+
+In this example, the plan shows that the \`trades\` table undergoes a full scan
+(\`DataFrame\` and subnodes) and the data is processed by the parallelized
+JIT-compiled filter.
+
+### Using \`EXPLAIN\` for the plan for \`CREATE\` and \`INSERT\`
+
+Apart from \`SELECT\`, \`EXPLAIN\` also works on \`CREATE\` and \`INSERT\` statements.
+Single-row inserts are straightforward. The examples in this section show the
+plan for more complicated \`CREATE\` and \`INSERT\` queries.
+
+\`\`\`questdb-sql
+EXPLAIN CREATE TABLE trades AS
+(
+ SELECT
+ rnd_symbol('a', 'b') symbol,
+ rnd_symbol('Buy', 'Sell') side,
+ rnd_double() price,
+ rnd_double() amount,
+ x::timestamp timestamp
+ FROM long_sequence(10)
+) TIMESTAMP(timestamp) PARTITION BY DAY;
+\`\`\`
+
+| QUERY PLAN |
+| -------------------------------------------------------------------------------------------------------------------------------- |
+| Create table: trades |
+| VirtualRecord |
+| functions: [rnd_symbol([a,b]),rnd_symbol([Buy,Sell]),rnd_double(),rnd_double(),x::timestamp] |
+| long_sequence count: 10 |
+
+The plan above shows that the data is fetched from a \`long_sequence\` cursor,
+with random data generating functions called in \`VirtualRecord\`.
+
+The same applies to the following query:
+
+\`\`\`questdb-sql
+EXPLAIN INSERT INTO trades
+ SELECT
+ rnd_symbol('a', 'b') symbol,
+ rnd_symbol('Buy', 'Sell') side,
+ rnd_double() price,
+ rnd_double() amount,
+ x::timestamp timestamp
+ FROM long_sequence(10);
+\`\`\`
+
+| QUERY PLAN |
+| -------------------------------------------------------------------------------------------------------------------------------- |
+| Insert into table: trades |
+| VirtualRecord |
+| functions: [rnd_symbol([a,b]),rnd_symbol([Buy,Sell]),rnd_double(),rnd_double(),x::timestamp] |
+| long_sequence count: 10 |
+
+Of course, statements could be much more complex than that. Consider the
+following \`UPDATE\` query:
+
+\`\`\`questdb-sql
+EXPLAIN UPDATE trades SET amount = 0 WHERE timestamp IN '2022-11-11';
+\`\`\`
+
+| QUERY PLAN |
+| ------------------------------------------------------------------------------------------------------------------------------------------ |
+| Update table: trades |
+| VirtualRecord |
+| functions: [0] |
+| DataFrame |
+| Row forward scan |
+| Interval forward scan on: trades |
+| intervals: [static=[1668124800000000,1668211199999999] |
+
+The important bit here is \`Interval forward scan\`. It means that the table is
+forward scanned only between points designated by the
+\`timestamp IN '2022-11-11'\` predicate, that is, between
+\`2022-11-11 00:00:00.000000\` and \`2022-11-11 23:59:59.999999\` (shown as raw
+epoch micro values in the plan above). \`VirtualRecord\` is only used to pass the 0
+constant for each row coming from \`DataFrame\`.
+
+## Limitations
+
+To minimize resource usage, the \`EXPLAIN\` command does not execute the
+statement. This avoids paying a potentially large upfront cost for certain
+queries (especially those involving a hash join or sort).
+
+\`EXPLAIN\` provides a useful indication of how a query will be executed, but it
+is not guaranteed to show the actual execution plan, because some elements are
+only determined at query runtime.
+
+While \`EXPLAIN\` shows the number of workers that could be used by a parallelized
+node, this is only an upper limit. Depending on the data volume and system load,
+a query may use fewer workers.
+
+:::note
+
+Under the hood, the plan nodes are called \`Factories\`. Most plan nodes can be
+mapped to implementation by adding the \`RecordCursorFactory\` or
+\`FrameCursorFactory\` suffix, e.g.
+
+- \`DataFrame\` -> \`DataFrameRecordCursorFactory\`
+- \`Async JIT Filter\` -> \`AsyncJitFilteredRecordCursorFactory\`
+- \`SampleByFillNoneNotKeyed\` -> \`SampleByFillNoneNotKeyedRecordCursorFactory\`
+ while some are a bit harder to identify, e.g.
+- \`GroupByRecord vectorized: false\` ->
+ \`io.questdb.griffin.engine.groupby.GroupByRecordCursorFactory\`
+- \`GroupByRecord vectorized: true\` ->
+ \`io.questdb.griffin.engine.groupby.vect.GroupByRecordCursorFactory\`
+
+Other classes can be identified by searching for the node name in the \`toPlan()\`
+methods.
+
+:::
+
+## See also
+
+This section includes links to additional information such as tutorials:
+
+- [EXPLAIN Your SQL Query Plan](/blog/explain-sql-query-plan/)
+- [Exploring Query Plan Scan Nodes with SQL EXPLAIN](/blog/exploring-query-plan-scan-nodes-sql-explain/)
+`
+ },
+ {
+ path: "sql/fill.md",
+ title: "FILL keyword",
+ headers: [],
+ content: `Queries using a [SAMPLE BY](/docs/reference/sql/sample-by/) aggregate on data
+which has missing records may return a discontinuous series of results. The
+\`FILL\` keyword allows for specifying a fill behavior for results which have
+missing aggregates due to missing rows.
+
+Details for the \`FILL\` keyword can be found on the
+[SAMPLE BY](/docs/reference/sql/sample-by/) page.
+
+To specify a default handling for \`null\` values within queries, see the
+[coalesce() function](/docs/reference/function/conditional/#coalesce)
+documentation.
+`
+ },
+ {
+ path: "sql/group-by.md",
+ title: "GROUP BY keyword",
+ headers: ["Syntax"],
+ content: `Groups aggregation calculations by one or several keys. In QuestDB, this clause
+is [optional](/docs/concept/sql-extensions/#group-by-is-optional).
+
+## Syntax
+
+
+
+:::note
+
+QuestDB groups aggregation results implicitly and does not require the GROUP BY
+keyword. It is only supported for convenience. Using the GROUP BY clause
+explicitly will return the same results as if the clause was omitted.
+
+:::
+
+## Examples
+
+The below queries perform aggregations on a single key. Using \`GROUP BY\`
+explicitly or implicitly yields the same results:
+
+\`\`\`questdb-sql title="Single key aggregation, explicit GROUP BY"
+SELECT sensorId, avg(temp)
+FROM readings
+GROUP BY sensorId;
+\`\`\`
+
+\`\`\`questdb-sql title="Single key aggregation, implicit GROUP BY"
+SELECT sensorId, avg(temp)
+FROM readings;
+\`\`\`
+
+The below queries perform aggregations on multiple keys. Using \`GROUP BY\`
+explicitly or implicitly yields the same results:
+
+\`\`\`questdb-sql title="Multiple key aggregation, explicit GROUP BY"
+SELECT sensorId, sensorType, avg(temp)
+FROM readings
+GROUP BY sensorId,sensorType;
+\`\`\`
+
+\`\`\`questdb-sql title="Multiple key aggregation, implicit GROUP BY"
+SELECT sensorId, sensorType, avg(temp)
+FROM readings;
+\`\`\`
+
+When used explicitly, the list of keys in the \`GROUP BY\` clause must match the
+list of keys in the \`SELECT\` clause, otherwise an error will be returned:
+
+\`\`\`questdb-sql title="Error - Column b is missing in the GROUP BY clause"
+SELECT a, b, avg(temp)
+FROM tab
+GROUP BY a;
+\`\`\`
+
+\`\`\`questdb-sql title="Error - Column b is missing in the SELECT clause"
+SELECT a, avg(temp)
+FROM tab
+GROUP BY a, b;
+\`\`\`
+
+\`\`\`questdb-sql title="Success - Columns match"
+SELECT a, b, avg(temp)
+FROM tab
+GROUP BY a, b;
+\`\`\`
+`
+ },
+ {
+ path: "sql/insert.md",
+ title: "INSERT keyword",
+ headers: ["Syntax"],
+ content: `\`INSERT\` ingests selected data into a database table.
+
+## Syntax
+
+Inserting values directly or using sub-queries:
+
+
+
+Inserting using sub-query alias:
+
+
+
+### Description
+
+:::note
+
+If the target partition is
+[attached by a symbolic link](/docs/reference/sql/alter-table-attach-partition/#symbolic-links),
+the partition is read-only. \`INSERT\` operation on a read-only partition triggers
+a critical-level log in the server, and the insert is a no-op.
+
+:::
+
+Inserting values directly or using sub-queries:
+
+- \`VALUE\`: Directly defines the values to be inserted.
+- \`SELECT\`: Inserts values based on the result of a
+ [SELECT](/docs/reference/sql/select/) query
+
+Setting sub-query alias:
+
+- \`WITH AS\`: Inserts values based on a sub-query, to which an alias is given by
+ using [WITH](/docs/reference/sql/with/).
+
+Parameter:
+
+- \`batch\` expects a \`batchCount\` (integer) value defining how many records to
+ process at any one time.
+
+## Examples
+
+\`\`\`questdb-sql title="Inserting all columns"
+INSERT INTO trades
+VALUES(
+ '2021-10-05T11:31:35.878Z',
+ 'AAPL',
+ 255,
+ 123.33,
+ 'B');
+\`\`\`
+
+\`\`\`questdb-sql title="Bulk inserts"
+INSERT INTO trades
+VALUES
+ ('2021-10-05T11:31:35.878Z', 'AAPL', 245, 123.4, 'C'),
+ ('2021-10-05T12:31:35.878Z', 'AAPL', 245, 123.3, 'C'),
+ ('2021-10-05T13:31:35.878Z', 'AAPL', 250, 123.1, 'C'),
+ ('2021-10-05T14:31:35.878Z', 'AAPL', 250, 123.0, 'C');
+\`\`\`
+
+\`\`\`questdb-sql title="Specifying schema"
+INSERT INTO trades (timestamp, symbol, quantity, price, side)
+VALUES(
+ to_timestamp('2019-10-17T00:00:00', 'yyyy-MM-ddTHH:mm:ss'),
+ 'AAPL',
+ 255,
+ 123.33,
+ 'B');
+\`\`\`
+
+:::note
+
+Columns can be omitted during \`INSERT\`, in which case the value will be \`NULL\`.
+
+:::
+
+\`\`\`questdb-sql title="Inserting only specific columns"
+INSERT INTO trades (timestamp, symbol, price)
+VALUES(to_timestamp('2019-10-17T00:00:00', 'yyyy-MM-ddTHH:mm:ss'), 'AAPL', 123.33);
+\`\`\`
+
+### Inserting query results
+
+This method allows you to insert as many rows as your query returns at once.
+
+\`\`\`questdb-sql title="Insert as select"
+INSERT INTO confirmed_trades
+ SELECT timestamp, instrument, quantity, price, side
+ FROM unconfirmed_trades
+ WHERE trade_id = '47219345234';
+\`\`\`
+
+Using the [\`WITH\` keyword](/docs/reference/sql/with/) to set up an alias for a
+\`SELECT\` sub-query:
+
+\`\`\`questdb-sql title="Insert with sub-query"
+WITH confirmed_id AS (
+ SELECT * FROM unconfirmed_trades
+ WHERE trade_id = '47219345234'
+)
+INSERT INTO confirmed_trades
+SELECT * FROM confirmed_id;
+\`\`\`
+
+:::note
+
+Since QuestDB v7.4.0, the default behaviour for \`INSERT INTO SELECT\` has been
+changed.
+
+Previously, the table would be created atomically. For large tables, this
+requires a significant amount of RAM, and can cause errors if the database runs
+out of memory.
+
+By default, this will be performed in batches. If the query fails, partial data
+may be inserted.
+
+If this is a problem, it is recommended to use the ATOMIC keyword
+(\`INSERT ATOMIC INTO\`). Alternatively, enabling deduplication on the table will
+allow you to perform an idempotent insert to re-insert any missed data.
+
+:::
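+
+As a sketch of the deduplication alternative mentioned above (see the
+deduplication documentation for choosing suitable upsert keys; the table and
+column names here follow the earlier examples):
+
+\`\`\`questdb-sql title="Enabling deduplication for idempotent re-inserts"
+ALTER TABLE confirmed_trades DEDUP ENABLE UPSERT KEYS(timestamp, instrument);
+\`\`\`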
+
+### ATOMIC
+
+Inserts can be performed atomically: all of the data is first loaded, and then
+committed in a single transaction.
+
+This requires all of the data to be available in memory at once, so large
+inserts may suffer from performance issues.
+
+To force this behaviour, one can use the \`ATOMIC\` keyword:
+
+\`\`\`questdb-sql title="Insert as select atomically"
+INSERT ATOMIC INTO confirmed_trades
+ SELECT timestamp, instrument, quantity, price, side
+ FROM unconfirmed_trades
+ WHERE trade_id = '47219345234';
+\`\`\`
+
+### BATCH
+
+By default, data will be inserted in batches.
+
+The size of the batches can be configured:
+
+- globally, by setting the \`cairo.sql.insert.model.batch.size\` configuration
+ option in \`server.conf\`.
+- locally, by using the \`BATCH\` keyword in the \`INSERT INTO\` statement.
+
+The composition is \`INSERT\` + \`BATCH\` + batch size + \`INTO\` + table name,
+followed by the \`SELECT\` statement.
+
+In our example, we use 4096 as the batch size:
+
+\`\`\`questdb-sql title="Insert as select batched"
+INSERT BATCH 4096 INTO confirmed_trades
+ SELECT timestamp, instrument, quantity, price, side
+ FROM unconfirmed_trades
+ WHERE trade_id = '47219345234';
+\`\`\`
+
+One can also specify the out-of-order commit lag for these batched writes, using
+the o3MaxLag option:
+
+\`\`\`questdb-sql title="Insert as select with batching and O3 lag"
+INSERT BATCH 4096 o3MaxLag '1s' INTO confirmed_trades
+ SELECT timestamp, instrument, quantity, price, side
+ FROM unconfirmed_trades
+ WHERE trade_id = '47219345234';
+\`\`\`
+`
+ },
+ {
+ path: "sql/join.md",
+ title: "JOIN keyword",
+ headers: ["Syntax", "Execution order", "Implicit joins", "Using the `ON` clause for the `JOIN` predicate", "ASOF JOIN", "(INNER) JOIN", "LEFT (OUTER) JOIN", "CROSS JOIN", "LT JOIN", "SPLICE JOIN"],
+ content: `QuestDB supports the type of joins you can frequently find in
+[relational databases](/glossary/relational-database/): \`INNER\`, \`LEFT (OUTER)\`,
+\`CROSS\`. Additionally, it implements joins which are particularly useful for
+time-series analytics: \`ASOF\`, \`LT\`, and \`SPLICE\`. \`FULL\` joins are not yet
+implemented and are on our roadmap.
+
+All supported join types can be combined in a single SQL statement; QuestDB
+SQL's optimizer determines the best execution order and algorithms.
+
+There are no known limitations on the size of tables or sub-queries used in
+joins and there are no limitations on the number of joins, either.
+
+## Syntax
+
+High-level overview:
+
+
+
+- \`selectClause\` - see [SELECT](/docs/reference/sql/select/) for more
+ information.
+- \`whereClause\` - see [WHERE](/docs/reference/sql/where/) for more information.
+- The specific syntax for \`joinClause\` depends on the type of \`JOIN\`:
+
+  - \`INNER\` and \`LEFT\` \`JOIN\` have a mandatory \`ON\` clause allowing arbitrary
+    \`JOIN\` predicates, \`operator\`:
+
+ 
+
+  - \`ASOF\`, \`LT\`, and \`SPLICE\` \`JOIN\` have an optional \`ON\` clause allowing only
+    the \`=\` predicate.
+  - \`ASOF\` and \`LT\` joins additionally allow an optional \`TOLERANCE\` clause:
+
+ 
+
+ - \`CROSS JOIN\` does not allow any \`ON\` clause:
+
+ 
+
+Columns from joined tables are combined in a single row. Columns with the same
+name originating from different tables will be automatically aliased to create a
+unique column namespace of the resulting set.
+
+Though it is usually preferable to explicitly specify join conditions, QuestDB
+will analyze \`WHERE\` clauses for implicit join conditions and will derive
+transient join conditions where necessary.
+
+## Execution order
+
+Join operations are performed in the order of their appearance in a SQL query.
+The following query joins a very small table (just one row in this example) with
+a bigger table of 10 million rows:
+
+\`\`\`questdb-sql
+WITH
+ Manytrades AS
+ (SELECT * FROM trades limit 10000000),
+ Lookup AS
+ (SELECT 'BTC-USD' AS Symbol, 'Bitcoin/USD Pair' AS Description)
+SELECT *
+FROM Lookup
+INNER JOIN ManyTrades
+ ON Lookup.symbol = Manytrades.symbol;
+\`\`\`
+
+The performance of this query can be improved by rewriting the query as follows:
+
+\`\`\`questdb-sql
+WITH
+ Manytrades AS
+ (SELECT * FROM trades limit 10000000),
+ Lookup AS
+ (SELECT 'BTC-USD' AS Symbol, 'Bitcoin/USD Pair' AS Description)
+SELECT *
+FROM ManyTrades
+INNER JOIN Lookup
+ ON Lookup.symbol = Manytrades.symbol;
+\`\`\`
+
+As a general rule, whenever one table is significantly larger than the other,
+put the larger one first. If you use \`EXPLAIN\` with the queries above, you
+should see that the first version needs to hash over 10 million rows, while the
+second version needs to hash over only 1 row.
+
+## Implicit joins
+
+It is possible to join two tables using the following syntax:
+
+\`\`\`questdb-sql
+SELECT *
+FROM a, b
+WHERE a.id = b.id;
+\`\`\`
+
+The type of join as well as the column are inferred from the \`WHERE\` clause, and
+may be either an \`INNER\` or \`CROSS\` join. For the example above, the equivalent
+explicit statement would be:
+
+\`\`\`questdb-sql
+SELECT *
+FROM a
+JOIN b ON (id);
+\`\`\`
+
+## Using the \`ON\` clause for the \`JOIN\` predicate
+
+When tables are joined on a column that has the same name in both tables you can
+use the \`ON (column)\` shorthand.
+
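+A minimal sketch of the shorthand, assuming hypothetical \`orders\` and
+\`customers\` tables that both contain a \`customer_id\` column:
+
+\`\`\`questdb-sql title="JOIN with the ON (column) shorthand"
+SELECT *
+FROM orders
+JOIN customers ON (customer_id);
+\`\`\`
+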
+When the \`ON\` clause is permitted (all except \`CROSS JOIN\`), it is possible to
+join multiple columns.
+
+For example, the following two tables contain identical column names \`symbol\`
+and \`side\`:
+
+\`mayTrades\`:
+
+
+
+| symbol | side | total |
+| ------- | ---- | ------ |
+| ADA-BTC | buy | 8079 |
+| ADA-BTC | sell | 7678 |
+| ADA-USD | buy | 308271 |
+| ADA-USD | sell | 279624 |
+
+
+
+\`juneTrades\`:
+
+
+
+| symbol | side | total |
+| ------- | ---- | ------ |
+| ADA-BTC | buy | 10253 |
+| ADA-BTC | sell | 17460 |
+| ADA-USD | buy | 312359 |
+| ADA-USD | sell | 245066 |
+
+
+
+It is possible to add multiple \`JOIN ON\` conditions:
+
+\`\`\`questdb-sql
+WITH
+ mayTrades AS (
+ SELECT symbol, side, COUNT(*) as total
+ FROM trades
+ WHERE timestamp in '2024-05'
+ ORDER BY Symbol
+ LIMIT 4
+ ),
+ juneTrades AS (
+ SELECT symbol, side, COUNT(*) as total
+ FROM trades
+ WHERE timestamp in '2024-06'
+ ORDER BY Symbol
+ LIMIT 4
+ )
+SELECT *
+FROM mayTrades
+JOIN juneTrades
+ ON mayTrades.symbol = juneTrades.symbol
+ AND mayTrades.side = juneTrades.side;
+\`\`\`
+
+The query can be simplified further since the column names are identical:
+
+\`\`\`questdb-sql
+WITH
+ mayTrades AS (
+ SELECT symbol, side, COUNT(*) as total
+ FROM trades
+ WHERE timestamp in '2024-05'
+ ORDER BY Symbol
+ LIMIT 4
+ ),
+ juneTrades AS (
+ SELECT symbol, side, COUNT(*) as total
+ FROM trades
+ WHERE timestamp in '2024-06'
+ ORDER BY Symbol
+ LIMIT 4
+ )
+SELECT *
+FROM mayTrades
+JOIN juneTrades ON (symbol, side);
+\`\`\`
+
+The result of both queries is the following:
+
+
+
+| symbol | symbol1 | side | side1 | total | total1 |
+| ------- | ------- | ---- | ----- | ------ | ------ |
+| ADA-BTC | ADA-BTC | buy | buy | 8079 | 10253 |
+| ADA-BTC | ADA-BTC | sell | sell | 7678 | 17460 |
+| ADA-USD | ADA-USD | buy | buy | 308271 | 312359 |
+| ADA-USD | ADA-USD | sell | sell | 279624 | 245066 |
+
+
+
+## ASOF JOIN
+
+ASOF JOIN is a powerful time-series join extension.
+
+It has its own page, [ASOF JOIN](/docs/reference/sql/asof-join/).
+
+## (INNER) JOIN
+
+\`(INNER) JOIN\` returns rows from two tables where the records on the compared
+column have matching values in both tables. \`JOIN\` is interpreted as
+\`INNER JOIN\` by default, making the \`INNER\` keyword implicit.
+
+The query we just saw above is an example. It returns the \`symbol\`, \`side\` and
+\`total\` from the \`mayTrades\` subquery, and adds the \`symbol\`, \`side\`, and
+\`total\` from the \`juneTrades\` subquery. Both tables are matched based on the
+\`symbol\` and \`side\`, as specified on the \`ON\` condition.
+
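+For a compact, self-contained sketch, the following inner join keeps only the
+trades whose \`symbol\` has a match in a hypothetical one-row lookup:
+
+\`\`\`questdb-sql title="INNER JOIN ON"
+WITH Lookup AS
+  (SELECT 'BTC-USD' AS Symbol, 'Bitcoin/USD Pair' AS Description)
+SELECT *
+FROM trades
+INNER JOIN Lookup
+  ON Lookup.Symbol = trades.symbol
+LIMIT 10;
+\`\`\`
+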
+## LEFT (OUTER) JOIN
+
+\`LEFT OUTER JOIN\` or simply \`LEFT JOIN\` returns **all** records from the left
+table, and if matched, the records of the right table. When there is no match
+for the right table, it returns \`NULL\` values in right table fields.
+
+The general syntax is as follows:
+
+\`\`\`questdb-sql title="LEFT JOIN ON"
+WITH
+ Manytrades AS
+ (SELECT * FROM trades limit 100),
+ Lookup AS
+ (SELECT 'BTC-USD' AS Symbol, 'Bitcoin/USD Pair' AS Description)
+SELECT *
+FROM ManyTrades
+LEFT OUTER JOIN Lookup
+ ON Lookup.symbol = Manytrades.symbol;
+\`\`\`
+
+In this example, the result will have 100 rows, one for each row on the
+\`ManyTrades\` subquery. When there is no match with the \`Lookup\` subquery, the
+columns \`Symbol1\` and \`Description\` will be \`null\`.
+
+\`\`\`sql
+-- Omitting 'OUTER' makes no difference:
+WITH
+ Manytrades AS
+ (SELECT * FROM trades limit 100),
+ Lookup AS
+ (SELECT 'BTC-USD' AS Symbol, 'Bitcoin/USD Pair' AS Description)
+SELECT *
+FROM ManyTrades
+LEFT JOIN Lookup
+ ON Lookup.symbol = Manytrades.symbol;
+\`\`\`
+
+A \`LEFT OUTER JOIN\` query can also be used to select all rows in the left table
+that do not exist in the right table.
+
+\`\`\`questdb-sql
+WITH
+ Manytrades AS
+ (SELECT * FROM trades limit 100),
+ Lookup AS
+ (SELECT 'BTC-USD' AS Symbol, 'Bitcoin/USD Pair' AS Description)
+SELECT *
+FROM ManyTrades
+LEFT OUTER JOIN Lookup
+ ON Lookup.symbol = Manytrades.symbol
+WHERE Lookup.Symbol = NULL;
+\`\`\`
+
+In this case, the result has 71 rows out of the 100 in the larger table, and the
+columns corresponding to the \`Lookup\` table are all \`NULL\`.
+
+## CROSS JOIN
+
+\`CROSS JOIN\` returns the Cartesian product of the two tables being joined and
+can be used to create a table with all possible combinations of columns.
+
+The following query joins a table (a subquery in this case) with itself, to
+check row by row whether any rows have exactly the same values for all the
+columns except the timestamp, with timestamps within 10 seconds of each other:
+
+\`\`\`questdb-sql
+-- detect potential duplicates, with same values
+-- and within a 10 seconds range
+
+WITH t AS (
+ SELECT * FROM trades WHERE timestamp IN '2024-06-01'
+)
+SELECT * from t CROSS JOIN t AS t2
+WHERE t.timestamp < t2.timestamp
+ AND datediff('s', t.timestamp , t2.timestamp ) < 10
+ AND t.symbol = t2.symbol
+ AND t.side = t2.side
+ AND t.price = t2.price
+ AND t.amount = t2.amount;
+\`\`\`
+
+:::note
+
+\`CROSS JOIN\` does not have an \`ON\` clause.
+
+:::
+
+## LT JOIN
+
+Similar to [\`ASOF JOIN\`](/docs/reference/sql/asof-join/), \`LT JOIN\` joins two different time series. For
+each row in the first time series, the \`LT JOIN\` takes from the second
+time series a timestamp that meets both of the following criteria:
+
+- The timestamp is the closest to the first timestamp.
+- The timestamp is **strictly prior to** the first timestamp.
+
+In other words: \`LT JOIN\` won't join records with equal timestamps.
+
+### Example
+
+Consider the following tables:
+
+Table \`tradesA\`:
+
+
+
+| timestamp | price |
+| --------------------------- | -------- |
+| 2022-03-08T18:03:57.710419Z | 39269.98 |
+| 2022-03-08T18:03:58.357448Z | 39265.31 |
+| 2022-03-08T18:03:58.357448Z | 39265.31 |
+
+
+
+Table \`tradesB\`:
+
+
+
+| timestamp | price |
+| --------------------------- | -------- |
+| 2022-03-08T18:03:57.710419Z | 39269.98 |
+| 2022-03-08T18:03:58.357448Z | 39265.31 |
+| 2022-03-08T18:03:58.357448Z | 39265.31 |
+
+
+
+An \`LT JOIN\` can be built using the following query:
+
+\`\`\`questdb-sql
+WITH miniTrades AS (
+ SELECT timestamp, price
+ FROM TRADES
+ WHERE symbol = 'BTC-USD'
+ LIMIT 3
+)
+SELECT tradesA.timestamp, tradesB.timestamp, tradesA.price
+FROM miniTrades tradesA
+LT JOIN miniTrades tradesB;
+\`\`\`
+
+The query above returns the following results:
+
+
+
+| timestamp | timestamp1 | price |
+| --------------------------- | --------------------------- | -------- |
+| 2022-03-08T18:03:57.710419Z | NULL | 39269.98 |
+| 2022-03-08T18:03:58.357448Z | 2022-03-08T18:03:57.710419Z | 39265.31 |
+| 2022-03-08T18:03:58.357448Z | 2022-03-08T18:03:57.710419Z | 39265.31 |
+
+
+
+Notice how the first record in the \`tradesA\` table is not joined with any record
+in the \`tradesB\` table. This is because there is no record in the \`tradesB\`
+table with a timestamp prior to the timestamp of the first record in the
+\`tradesA\` table.
+
+Similarly, the second record in the \`tradesA\` table is joined with the first
+record in the \`tradesB\` table because the timestamp of the first record in the
+\`tradesB\` table is prior to the timestamp of the second record in the \`tradesA\`
+table.
+
+:::note
+
+As seen in this example, \`LT\` join is often useful for joining a table to itself
+in order to get the preceding values for every row.
+
+:::
+
+The \`ON\` clause can also be used in combination with \`LT JOIN\` to join both by
+timestamp and column values.
+
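+For example, a self-join sketch that pairs every trade with the previous trade
+of the same \`symbol\` (the \`ON\` clause restricts the match to equal symbols):
+
+\`\`\`questdb-sql title="LT JOIN with an ON clause"
+SELECT t1.timestamp, t1.symbol, t1.price, t2.price AS prev_price
+FROM trades t1
+LT JOIN trades t2 ON (symbol);
+\`\`\`
+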
+### TOLERANCE clause
+The \`TOLERANCE\` clause enhances \`LT JOIN\` by limiting how far back in time the join should look for a match in the right
+table. The \`TOLERANCE\` parameter accepts a time interval value (e.g., \`2s\`, \`100T\`, \`1d\`).
+
+When specified, a record from the left table \`t1\` at \`t1.ts\` will only be joined with a record from the right table \`t2\` at
+\`t2.ts\` if both conditions are met: \`t2.ts < t1.ts\` and \`t1.ts - t2.ts <= tolerance_value\`.
+
+This ensures that the matched record from the right table is not only the latest one on or before \`t1.ts\`, but also within
+the specified time window.
+
+\`\`\`questdb-sql title="LT JOIN with a TOLERANCE parameter"
+SELECT ...
+FROM table1
+LT JOIN table2 TOLERANCE 10s
+[WHERE ...]
+\`\`\`
+
+The \`interval_literal\` must be a valid QuestDB interval string, like \`'5s'\` (5 seconds), \`'100T'\` (100 milliseconds),
+\`'2m'\` (2 minutes), \`'3h'\` (3 hours), or \`'1d'\` (1 day).
+
+#### Supported units for \`interval_literal\`
+The \`TOLERANCE\` interval literal supports the following time unit qualifiers:
+- U: Microseconds
+- T: Milliseconds
+- s: Seconds
+- m: Minutes
+- h: Hours
+- d: Days
+- w: Weeks
+
+For example, '100U' is 100 microseconds, '50T' is 50 milliseconds, '2s' is 2 seconds, '30m' is 30 minutes,
+'1h' is 1 hour, '7d' is 7 days, and '2w' is 2 weeks. Please note that months (M) and years (Y) are not supported as
+units for the \`TOLERANCE\` clause.
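+
+As a sketch, the following self-join pairs each trade with the previous trade of
+the same \`symbol\`, but only if that trade is at most 30 seconds older;
+otherwise \`prev_price\` is \`NULL\`:
+
+\`\`\`questdb-sql title="LT JOIN with TOLERANCE"
+SELECT t1.timestamp, t1.symbol, t1.price, t2.price AS prev_price
+FROM trades t1
+LT JOIN trades t2 ON (symbol) TOLERANCE 30s;
+\`\`\`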
+
+See [\`ASOF JOIN documentation\`](/docs/reference/sql/asof-join#tolerance-clause) for more examples with the \`TOLERANCE\` clause.
+
+## SPLICE JOIN
+
+\`SPLICE JOIN\` is a full \`ASOF JOIN\`. It returns all the records from both
+tables. For each record from the left table, splice join will find the
+prevailing record from the right table, and for each record from the right
+table, the prevailing record from the left table.
+
+Considering the following tables:
+
+Table \`buy\` (the left table):
+
+
+
+| timestamp | price |
+| --------------------------- | -------- |
+| 2024-06-22T00:00:00.039906Z | 0.092014 |
+| 2024-06-22T00:00:00.343909Z | 9.805 |
+
+
+
+The \`sell\` table (the right table):
+
+
+
+| timestamp | price |
+| --------------------------- | -------- |
+| 2024-06-22T00:00:00.222534Z | 64120.28 |
+| 2024-06-22T00:00:00.222534Z | 64120.28 |
+
+
+
+A \`SPLICE JOIN\` can be built as follows:
+
+\`\`\`questdb-sql
+WITH
+buy AS ( -- select the first 2 buys on June 22
+ SELECT timestamp, price FROM trades
+ WHERE timestamp IN '2024-06-22' AND side = 'buy' LIMIT 2
+),
+sell AS ( -- select the first 2 sells on June 22
+ SELECT timestamp, price FROM trades
+ WHERE timestamp IN '2024-06-22' AND side = 'sell' LIMIT 2
+)
+SELECT
+ buy.timestamp, sell.timestamp, buy.price, sell.price
+FROM buy
+SPLICE JOIN sell;
+\`\`\`
+
+This query returns the following results:
+
+
+
+| timestamp | timestamp1 | price | price1 |
+| --------------------------- | --------------------------- | -------- | -------- |
+| 2024-06-22T00:00:00.039906Z | NULL | 0.092014 | NULL |
+| 2024-06-22T00:00:00.039906Z | 2024-06-22T00:00:00.222534Z | 0.092014 | 64120.28 |
+| 2024-06-22T00:00:00.039906Z | 2024-06-22T00:00:00.222534Z | 0.092014 | 64120.28 |
+| 2024-06-22T00:00:00.343909Z | 2024-06-22T00:00:00.222534Z | 9.805 | 64120.28 |
+
+
+
+Note that the above query does not use the optional \`ON\` clause. In case you
+need additional filtering on the two tables, the \`ON\` clause can also be used.
+`
+ },
+ {
+ path: "sql/latest-on.md",
+ title: "LATEST ON keyword",
+ headers: ["Syntax", "Description"],
+ content: `Retrieves the latest entry by timestamp for a given key or combination of keys,
+for scenarios where multiple time series are stored in the same table.
+
+## Syntax
+
+
+
+where:
+
+- \`columnName\` used in the \`LATEST ON\` part of the clause is a \`TIMESTAMP\`
+ column.
+- \`columnName\` list used in the \`PARTITION BY\` part of the clause is a list of
+ columns of one of the following types: \`SYMBOL\`, \`STRING\`, \`BOOLEAN\`, \`SHORT\`,
+ \`INT\`, \`LONG\`, \`LONG256\`, \`CHAR\`.
+
+## Description
+
+\`LATEST ON\` is used as part of a [SELECT statement](/docs/reference/sql/select/)
+for returning the most recent records per unique time series identified by the
+\`PARTITION BY\` column values.
+
+\`LATEST ON\` requires a
+[designated timestamp](/docs/concept/designated-timestamp/) column. Use
+[sub-queries](#latest-on-over-sub-query) for tables without the designated
+timestamp.
+
+The query syntax has an impact on the [execution order](#execution-order) of the
+\`LATEST ON\` clause and the \`WHERE\` clause.
+
+To illustrate how \`LATEST ON\` is intended to be used, consider the \`trips\` table
+[in the QuestDB demo instance](https://demo.questdb.io/). This table has a
+\`payment_type\` column as \`SYMBOL\` type which specifies the method of payment per
+trip. We can find the most recent trip for each unique method of payment with
+the following query:
+
+\`\`\`questdb-sql
+SELECT payment_type, pickup_datetime, trip_distance
+FROM trips
+LATEST ON pickup_datetime PARTITION BY payment_type;
+\`\`\`
+
+| payment_type | pickup_datetime | trip_distance |
+| ------------ | --------------------------- | ------------- |
+| Dispute | 2014-12-31T23:55:27.000000Z | 1.2 |
+| Voided | 2019-06-27T17:56:45.000000Z | 1.9 |
+| Unknown | 2019-06-30T23:57:42.000000Z | 3.9 |
+| No Charge | 2019-06-30T23:59:30.000000Z | 5.2 |
+| Cash | 2019-06-30T23:59:54.000000Z | 2 |
+| Card | 2019-06-30T23:59:56.000000Z | 1 |
+
+The above query returns the latest value within each time series stored in the
+table. Those time series are determined by the values in the column(s)
+specified in the \`PARTITION BY\` part of the clause. In our example those time
+series are represented by different payment types. The column used in the
+\`LATEST ON\` part of the clause is the designated timestamp column for the
+table. This allows the database to find the latest value within each time series.
+
+## Examples
+
+For the next examples, we can create a table called \`balances\` with the
+following SQL:
+
+\`\`\`questdb-sql
+CREATE TABLE balances (
+ cust_id SYMBOL,
+ balance_ccy SYMBOL,
+ balance DOUBLE,
+ ts TIMESTAMP
+) TIMESTAMP(ts) PARTITION BY DAY;
+
+insert into balances values ('1', 'USD', 600.5, '2020-04-21T16:03:43.504432Z');
+insert into balances values ('2', 'USD', 950, '2020-04-21T16:08:34.404665Z');
+insert into balances values ('2', 'EUR', 780.2, '2020-04-21T16:11:22.704665Z');
+insert into balances values ('1', 'USD', 1500, '2020-04-21T16:11:32.904234Z');
+insert into balances values ('1', 'EUR', 650.5, '2020-04-22T16:11:32.904234Z');
+insert into balances values ('2', 'USD', 900.75, '2020-04-22T16:12:43.504432Z');
+insert into balances values ('2', 'EUR', 880.2, '2020-04-22T16:18:34.404665Z');
+insert into balances values ('1', 'USD', 330.5, '2020-04-22T16:20:14.404997Z');
+\`\`\`
+
+This provides us with a table with the following content:
+
+| cust_id | balance_ccy | balance | ts |
+| ------- | ----------- | ------- | --------------------------- |
+| 1 | USD | 600.5 | 2020-04-21T16:03:43.504432Z |
+| 2 | USD | 950 | 2020-04-21T16:08:34.404665Z |
+| 2 | EUR | 780.2 | 2020-04-21T16:11:22.704665Z |
+| 1 | USD | 1500 | 2020-04-21T16:11:32.904234Z |
+| 1 | EUR | 650.5 | 2020-04-22T16:11:32.904234Z |
+| 2 | USD | 900.75 | 2020-04-22T16:12:43.504432Z |
+| 2 | EUR | 880.2 | 2020-04-22T16:18:34.404665Z |
+| 1 | USD | 330.5 | 2020-04-22T16:20:14.404997Z |
+
+### Single column
+
+When a single \`symbol\` column is specified in \`LATEST ON\` queries, the query
+will end after all distinct symbol values are found.
+
+\`\`\`questdb-sql title="Latest records by customer ID"
+SELECT * FROM balances
+LATEST ON ts PARTITION BY cust_id;
+\`\`\`
+
+The query returns two rows with the most recent records per unique \`cust_id\`
+value:
+
+| cust_id | balance_ccy | balance | ts |
+| ------- | ----------- | ------- | --------------------------- |
+| 2 | EUR | 880.2 | 2020-04-22T16:18:34.404665Z |
+| 1 | USD | 330.5 | 2020-04-22T16:20:14.404997Z |
+
+### Multiple columns
+
+When multiple columns are specified in \`LATEST ON\` queries, the returned results
+are the most recent **unique combinations** of the column values. This example
+query returns \`LATEST ON\` customer ID and balance currency:
+
+\`\`\`questdb-sql title="Latest balance by customer and currency"
+SELECT cust_id, balance_ccy, balance
+FROM balances
+LATEST ON ts PARTITION BY cust_id, balance_ccy;
+\`\`\`
+
+The results return the most recent records for each unique combination of
+\`cust_id\` and \`balance_ccy\`.
+
+| cust_id | balance_ccy | balance |
+| ------- | ----------- | ------- |
+| 1 | EUR | 650.5 |
+| 2 | USD | 900.75 |
+| 2 | EUR | 880.2 |
+| 1 | USD | 330.5 |
+
+#### Performance considerations
+
+When the \`LATEST ON\` clause contains a single \`symbol\` column, QuestDB will know
+all distinct values upfront and stop scanning table contents once the latest
+entry has been found for each distinct symbol value.
+
+When the \`LATEST ON\` clause contains multiple columns, QuestDB has to scan the
+entire table to find distinct combinations of column values.
+
+Although scanning is fast, performance will degrade on hundreds of millions of
+records. If there are multiple columns in the \`LATEST ON\` clause, this will
+result in a full table scan.
+
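+One mitigation sketch: narrow the scanned range with a timestamp filter, which
+is applied before \`LATEST ON\` (see [Execution order](#execution-order)):
+
+\`\`\`questdb-sql title="Limiting the scan before LATEST ON"
+SELECT * FROM balances
+WHERE ts IN '2020-04-22'
+LATEST ON ts PARTITION BY cust_id, balance_ccy;
+\`\`\`
+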
+### LATEST ON over sub-query
+
+For this example, we can create another table called \`unordered_balances\` with
+the following SQL:
+
+\`\`\`questdb-sql
+CREATE TABLE unordered_balances (
+ cust_id SYMBOL,
+ balance_ccy SYMBOL,
+ balance DOUBLE,
+ ts TIMESTAMP
+);
+
+insert into unordered_balances values ('2', 'USD', 950, '2020-04-21T16:08:34.404665Z');
+insert into unordered_balances values ('1', 'USD', 330.5, '2020-04-22T16:20:14.404997Z');
+insert into unordered_balances values ('2', 'USD', 900.75, '2020-04-22T16:12:43.504432Z');
+insert into unordered_balances values ('1', 'USD', 1500, '2020-04-21T16:11:32.904234Z');
+insert into unordered_balances values ('1', 'USD', 600.5, '2020-04-21T16:03:43.504432Z');
+insert into unordered_balances values ('1', 'EUR', 650.5, '2020-04-22T16:11:32.904234Z');
+insert into unordered_balances values ('2', 'EUR', 880.2, '2020-04-22T16:18:34.404665Z');
+insert into unordered_balances values ('2', 'EUR', 780.2, '2020-04-21T16:11:22.704665Z');
+\`\`\`
+
+Note that this table doesn't have a designated timestamp column, and its rows
+are not ordered by the \`ts\` column.
+
+Due to the absent designated timestamp column, we can't use \`LATEST ON\` directly
+on this table, but it's possible to use \`LATEST ON\` over a sub-query:
+
+\`\`\`questdb-sql title="Latest balance by customer over unordered data"
+(SELECT * FROM unordered_balances)
+LATEST ON ts PARTITION BY cust_id;
+\`\`\`
+
+Just like with the \`balances\` table, the query returns two rows with the most
+recent records per unique \`cust_id\` value:
+
+| cust_id | balance_ccy | balance | ts |
+| ------- | ----------- | ------- | --------------------------- |
+| 2 | EUR | 880.2 | 2020-04-22T16:18:34.404665Z |
+| 1 | USD | 330.5 | 2020-04-22T16:20:14.404997Z |
+
+### Execution order
+
+The following queries illustrate how to change the execution order in a query by
+using brackets.
+
+#### WHERE first
+
+\`\`\`questdb-sql
+SELECT * FROM balances
+WHERE balance > 800
+LATEST ON ts PARTITION BY cust_id;
+\`\`\`
+
+This query executes \`WHERE\` before \`LATEST ON\` and returns, for each customer,
+the most recent balance above 800. The execution order is as follows:
+
+- filter out all balances below 800
+- find the latest balance by \`cust_id\`
+
+| cust_id | balance_ccy | balance | ts |
+| ------- | ----------- | ------- | --------------------------- |
+| 1 | USD | 1500 | 2020-04-21T16:11:32.904234Z |
+| 2 | EUR | 880.2 | 2020-04-22T16:18:34.404665Z |
+
+#### LATEST ON first
+
+\`\`\`questdb-sql
+(SELECT * FROM balances LATEST ON ts PARTITION BY cust_id) --note the brackets
+WHERE balance > 800;
+\`\`\`
+
+This query executes \`LATEST ON\` before \`WHERE\` and returns the most recent
+records, then filters out those below 800. The steps are:
+
+1. Find the latest balances by customer ID.
+2. Filter out balances below 800. Since the latest balance for customer 1 is
+ equal to 330.5, it is filtered out in this step.
+
+| cust_id | balance_ccy | balance | ts |
+| ------- | ----------- | ------- | --------------------------- |
+| 2 | EUR | 880.2 | 2020-04-22T16:18:34.404665Z |
+
+#### Combination
+
+It's possible to combine a time-based filter with the balance filter from the
+previous example to query the latest values for the \`2020-04-21\` date and filter
+out those below 800.
+
+\`\`\`questdb-sql
+(balances WHERE ts in '2020-04-21' LATEST ON ts PARTITION BY cust_id)
+WHERE balance > 800;
+\`\`\`
+
+Since QuestDB allows you to omit the \`SELECT * FROM\` part of the query, we
+omitted it to keep the query compact.
+
+Such a combination is very powerful since it allows you to find the latest
+values for a time slice of the data and then apply a filter to them in a single
+query.
+`
+ },
+ {
+ path: "sql/limit.md",
+ title: "LIMIT keyword",
+ headers: ["Syntax"],
+ content: `Specify the number and position of records returned by a
+[SELECT statement](/docs/reference/sql/select/).
+
+In other implementations of SQL, this is sometimes handled with statements such
+as \`OFFSET\` or \`ROWNUM\`. Our implementation of \`LIMIT\` encompasses both in one
+statement.
+
+## Syntax
+
+
+
+- \`numberOfRecords\` is the number of records to return.
+- \`upperBound\` and \`lowerBound\` define the return range. \`lowerBound\` is
+ **exclusive** and \`upperBound\` is **inclusive**.
+
+A positive number will return the first \`n\` records. A negative number
+will return the last \`n\` records.
+
+## Examples
+
+\`\`\`questdb-sql title="First 5 results"
+SELECT * FROM ratings LIMIT 5;
+\`\`\`
+
+\`\`\`questdb-sql title="Last 5 results"
+SELECT * FROM ratings LIMIT -5;
+\`\`\`
+
+\`\`\`questdb-sql title="Range results - this will return records 3, 4 and 5"
+SELECT * FROM ratings LIMIT 2,5;
+\`\`\`
+
+Negative range parameters will return results from the bottom of the table.
+Assuming a table with \`n\` records, the following will return records between
+\`n-7\` (exclusive) and \`n-3\` (inclusive), i.e. \`{n-6, n-5, n-4, n-3}\`. Both
+\`upperBound\` and \`lowerBound\` must be negative numbers in this case:
+
+\`\`\`questdb-sql title="Range results (negative)"
+SELECT * FROM ratings LIMIT -7, -3;
+\`\`\`
+`
+ },
+ {
+ path: "sql/order-by.md",
+ title: "ORDER BY keyword",
+ headers: ["Syntax", "Notes"],
+ content: `Sort the results of a query in ascending or descending order.
+
+## Syntax
+
+
+
+The default order is \`ASC\`; you can omit it when ordering in ascending order.
+
+## Notes
+
+Ordering data requires holding it in RAM. For large operations, we suggest you
+check that you have sufficient memory to perform the operation.
+
+## Examples
+
+\`\`\`questdb-sql title="Omitting ASC will default to ascending order"
+ratings ORDER BY userId;
+\`\`\`
+
+\`\`\`questdb-sql title="Ordering in descending order"
+ratings ORDER BY userId DESC;
+\`\`\`
+
+\`\`\`questdb-sql title="Multi-level ordering"
+ratings ORDER BY userId, rating DESC;
+\`\`\`
+`
+ },
+ {
+ path: "sql/over.md",
+ title: "Over Keyword - Window Functions",
+ headers: ["Deep Dive: What is a Window Function?", "Syntax", "Supported functions", "Components of a window function", "Frame types and behavior", "Frame boundaries", "Exclusion options", "Notes and restrictions"],
+ content: `Window functions perform calculations across sets of table rows that are related to the current row. Unlike aggregate functions that return a single result for a group of rows, window functions return a value for every row while considering a window of rows defined by the OVER clause.
+
+We'll cover high-level, introductory information about window functions, and then move on to composition.
+
+We also have some [common examples](/docs/reference/function/window#common-window-function-examples) to get you started.
+
+:::tip
+Click _Demo this query_ within our query examples to see them in our live demo.
+:::
+
+## Deep Dive: What is a Window Function?
+
+A window function performs a calculation across a set of rows that are related
+to the current row. This set of related rows is called a "window", defined by an
+\`OVER\` clause that follows the window function.
+
+In practical terms, window functions are used when you need to perform a
+calculation that depends on a group of rows, but you want to retain the
+individual rows in the result set. This is different from aggregate functions
+like a cumulative \`sum\` or \`avg\`, which perform calculations on a group of rows
+and return a single result.
+
+The underlying mechanism of a window function involves three components:
+
+- **Partitioning:** The \`PARTITION BY\` clause divides the result set into
+ partitions (groups of rows) upon which the window function is applied. If no
+ partition is defined, the function treats all rows of the query result set as
+ a single partition.
+
+- **Ordering:** The \`ORDER BY\` clause within the \`OVER\` clause determines the
+ order of the rows in each partition.
+
+- **Frame Specification:** This defines the set of rows included in the window,
+ relative to the current row. For example,
+ \`ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW\` includes all rows from the
+ start of the partition to the current row.
+
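+The three components can be sketched together in a single query; here a running
+total of \`amount\` is computed per \`symbol\` over the demo \`trades\` table:
+
+\`\`\`questdb-sql title="Running total per symbol"
+SELECT symbol, amount,
+  sum(amount) OVER (
+    PARTITION BY symbol
+    ORDER BY timestamp
+    ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
+  ) AS running_total
+FROM trades;
+\`\`\`
+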
+Use cases for window functions are vast.
+
+They are often used in analytics for tasks such as:
+
+- Calculating running totals or averages
+- Finding the maximum or minimum value in a sequence or partition
+- Ranking items within a specific category or partition
+- Calculating [moving averages](/docs/reference/function/window#avg) or
+ [cumulative sums](/docs/reference/function/window#cumulative-bid-size)
+
+Window functions are tough to grok.
+
+An analogy before we get to building:
+
+Imagine a group of cars in a race. Each car has a number, a name, and a finish
+time. If you wanted to know the average finish time, you could use an aggregate
+function like [\`avg()\`](/docs/reference/function/window#avg) to calculate it. But this would only give you a single
+result: the average time. You wouldn't know anything about individual cars'
+times.
+
+For example, a window function allows you to calculate the average finish time
+for all the cars (the window), but display it on each car (row), so you can
+compare each car's time to this average and see whether the car was faster or
+slower than the global average.
+
+So, in essence, window functions allow you to perform calculations that consider
+more than just the individual row or the entire table, but a 'window' of related
+rows. This 'window' could be all rows with the same value in a certain column,
+like all cars of the same engine size, or it could be a range of rows based on
+some order, like the three cars that finished before and after a certain car.
+
+This makes window functions incredibly powerful for complex calculations and
+analyses.
+
+## Syntax
+
+\`\`\`txt
+functionName OVER (
+ [PARTITION BY columnName [, ...]]
+ [ORDER BY columnName [ASC | DESC] [, ...]]
+ [ROWS | RANGE BETWEEN frame_start AND frame_end]
+ [EXCLUDE CURRENT ROW | EXCLUDE NO OTHERS]
+)
+\`\`\`
+Where:
+
+- \`functionName\`: The window function to apply (e.g., avg, sum, rank)
+- \`OVER\`: Specifies the window over which the function operates
+ - \`PARTITION BY\`: Divides the result set into partitions
+ - \`ORDER BY\`: Specifies the order of rows within each partition
+ - \`ROWS | RANGE BETWEEN\`: Defines the window frame relative to the current row
+ - \`EXCLUDE\`: Optionally excludes certain rows from the frame
+
+## Supported functions
+
+- [\`avg()\`](/docs/reference/function/window#avg) – Calculates the average within a window
+
+- [\`count()\`](/docs/reference/function/window#count) – Counts rows or non-null values
+
+- [\`dense_rank()\`](/docs/reference/function/window#dense_rank) – Assigns ranks to rows with no gaps in the ranking
+
+- [\`first_not_null_value()\`](/docs/reference/function/window#first_not_null_value) – Retrieves the first not null value in a window
+
+- [\`first_value()\`](/docs/reference/function/window#first_value) – Retrieves the first value in a window
+
+- [\`lag()\`](/docs/reference/function/window#lag) – Accesses data from previous rows
+
+- [\`last_value()\`](/docs/reference/function/window#last_value) – Retrieves the last value in a window
+
+- [\`lead()\`](/docs/reference/function/window#lead) – Accesses data from subsequent rows
+
+- [\`max()\`](/docs/reference/function/window#max) – Returns the maximum value within a window
+
+- [\`min()\`](/docs/reference/function/window#min) – Returns the minimum value within a window
+
+- [\`rank()\`](/docs/reference/function/window#rank) – Assigns a rank to rows
+
+- [\`row_number()\`](/docs/reference/function/window#row_number) – Assigns sequential numbers to rows
+
+- [\`sum()\`](/docs/reference/function/window#cumulative-bid-size) – Calculates the sum within a window
+
+## Components of a window function
+
+A window function calculates results across a set of rows related to the current row, called a window. This allows for complex calculations like moving averages, running totals, and rankings without collapsing rows.
+
+1. **Function Name**: Specifies the calculation to perform (e.g., \`avg(price)\`)
+2. **OVER Clause**: Defines the window for the function
+ - \`PARTITION BY\`: Divides the result set into partitions
+ - \`ORDER BY\`: Orders rows within partitions
+ - Frame Specification: Defines the subset of rows using ROWS or RANGE
+3. **Exclusion Option**: Excludes specific rows from the frame
+
+### Example
+
+\`\`\`questdb-sql title="Moving average example" demo
+SELECT
+ symbol,
+ price,
+ timestamp,
+ avg(price) OVER (
+ PARTITION BY symbol
+ ORDER BY timestamp
+ ROWS BETWEEN 3 PRECEDING AND CURRENT ROW
+ ) AS moving_avg
+FROM trades;
+\`\`\`
+
+This calculates a moving average of price over the current and three preceding rows for each symbol. For other
+common window function examples, please check the [Window functions reference](/docs/reference/function/window#common-window-function-examples).
+
+
+## Frame types and behavior
+
+Window frames specify which rows are included in the calculation relative to the current row.
+
+\`\`\`mermaid
+sequenceDiagram
+ participant R1 as Row at 09:00
+ participant R2 as Row at 09:02
+ participant R3 as Row at 09:03
+ participant R4 as Row at 09:04 (Current Row)
+
+ Note over R4: Calculating at 09:04
+
+ rect rgb(191, 223, 255)
+ Note over R2,R4: ROWS BETWEEN 2 PRECEDING AND CURRENT ROW
+ end
+
+ rect rgb(255, 223, 191)
+ Note over R3,R4: RANGE BETWEEN '1' MINUTE PRECEDING AND CURRENT ROW
+ end
+\`\`\`
+
+### ROWS frame
+
+Defines the frame based on a physical number of rows:
+
+\`\`\`txt
+ROWS BETWEEN 2 PRECEDING AND CURRENT ROW
+\`\`\`
+
+This includes the current row and two preceding rows.
+
+\`\`\`mermaid
+sequenceDiagram
+ participant R1 as Row 1
+ participant R2 as Row 2
+ participant R3 as Row 3
+ participant R4 as Row 4
+ participant R5 as Row 5
+
+ Note over R1: Frame includes Row1
+ Note over R2: Frame includes Row1, Row2
+ Note over R3: Frame includes Row1, Row2, Row3
+ Note over R4: Frame includes Row2, Row3, Row4
+ Note over R5: Frame includes Row3, Row4, Row5
+\`\`\`
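+
+The same frame arithmetic can be sketched outside the database. The snippet below is a toy Python model of the \`ROWS BETWEEN 2 PRECEDING AND CURRENT ROW\` frame, not QuestDB internals:
+
+\`\`\`python
+# Toy model of ROWS BETWEEN 2 PRECEDING AND CURRENT ROW (not QuestDB internals).
+def rows_frame_avg(values, preceding=2):
+    """Average the current value and up to 'preceding' prior rows, per row."""
+    out = []
+    for i in range(len(values)):
+        frame = values[max(0, i - preceding): i + 1]
+        out.append(sum(frame) / len(frame))
+    return out
+
+prices = [10.0, 20.0, 30.0, 40.0, 50.0]
+# Row 5's frame is rows 3..5 -> (30 + 40 + 50) / 3 = 40.0
+print(rows_frame_avg(prices))  # [10.0, 15.0, 20.0, 30.0, 40.0]
+\`\`\`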
+
+### RANGE frame
+
+:::note
+RANGE functions have a known issue. When using RANGE, all the rows with the same value will have the same output for the function. Read the [open issue](https://github.com/questdb/questdb/issues/5177) for more information.
+:::
+
+Defines the frame based on the actual values in the ORDER BY column, rather than counting rows. Unlike ROWS, which counts a specific number of rows, RANGE considers the values in the ORDER BY column to determine the window.
+
+Important requirements for RANGE:
+- Data must be ordered by the designated timestamp column
+- The window is calculated based on the values in that ORDER BY column
+
+For example, with a current row at 09:04 and \`RANGE BETWEEN '1' MINUTE PRECEDING AND CURRENT ROW\`:
+- Only includes rows with timestamps between 09:03 and 09:04 (inclusive)
+- Earlier rows (e.g., 09:00, 09:02) are excluded as they fall outside the 1-minute range
+
+\`\`\`mermaid
+sequenceDiagram
+ participant R1 as Row at 09:00
+ participant R2 as Row at 09:02
+ participant R3 as Row at 09:03
+ participant R4 as Row at 09:04 (Current Row)
+
+ Note over R4: Calculating at 09:04
+
+ %% Only include rows within 1 minute of current row (09:03-09:04)
+ rect rgb(255, 223, 191)
+ Note over R3,R4: RANGE BETWEEN '1' MINUTE PRECEDING AND CURRENT ROW
+ end
+
+ %% Show excluded rows in grey or with a visual indicator
+ Note over R1,R2: Outside 1-minute range
+\`\`\`
+
+The following time units can be used in RANGE window functions:
+
+- day
+- hour
+- minute
+- second
+- millisecond
+- microsecond
+
+Plural forms of these time units are also accepted (e.g., 'minutes', 'hours').
+
+\`\`\`questdb-sql title="Multiple time intervals example" demo
+SELECT
+ timestamp,
+ bid_px_00,
+ -- 5-minute average: includes rows from (current_timestamp - 5 minutes) to current_timestamp
+ AVG(bid_px_00) OVER (
+ ORDER BY timestamp
+ RANGE BETWEEN '5' MINUTE PRECEDING AND CURRENT ROW
+ ) AS avg_5min,
+ -- 100ms count: includes rows from (current_timestamp - 100ms) to current_timestamp
+ COUNT(*) OVER (
+ ORDER BY timestamp
+ RANGE BETWEEN '100' MILLISECOND PRECEDING AND CURRENT ROW
+ ) AS updates_100ms,
+ -- 2-second sum: includes rows from (current_timestamp - 2 seconds) to current_timestamp
+ SUM(bid_sz_00) OVER (
+ ORDER BY timestamp
+ RANGE BETWEEN '2' SECOND PRECEDING AND CURRENT ROW
+ ) AS volume_2sec
+FROM AAPL_orderbook
+WHERE bid_px_00 > 0
+LIMIT 10;
+\`\`\`
+
+This query demonstrates different time intervals in action, calculating:
+- 5-minute moving average of best bid price
+- Update frequency in 100ms windows
+- 2-second rolling volume
+
+Note that each window calculation is based on the timestamp values, not the number of rows. This means the number of rows included can vary depending on how many records exist within each time interval.
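+
+As a rough illustration, the value-based window can be modeled in a few lines of Python. This is a toy sketch of \`RANGE BETWEEN '1' MINUTE PRECEDING AND CURRENT ROW\` with timestamps held as plain seconds, not QuestDB internals:
+
+\`\`\`python
+# Toy model of RANGE BETWEEN '1' MINUTE PRECEDING AND CURRENT ROW.
+# Each row is (timestamp_seconds, value) and rows are ordered by timestamp.
+def range_frame_count(rows, window_seconds=60):
+    out = []
+    for i, (ts, _val) in enumerate(rows):
+        # Count rows whose timestamp falls within [ts - window, ts].
+        out.append(sum(1 for t, _ in rows[:i + 1] if ts - window_seconds <= t <= ts))
+    return out
+
+# Rows at 09:00, 09:02, 09:03 and 09:04, expressed as seconds past 09:00.
+rows = [(0, 1.0), (120, 2.0), (180, 3.0), (240, 4.0)]
+print(range_frame_count(rows))  # [1, 1, 2, 2] - at 09:04 only 09:03 and 09:04 qualify
+\`\`\`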
+
+## Frame boundaries
+
+Frame boundaries determine which rows are included in the window calculation:
+
+- \`UNBOUNDED PRECEDING\`: Starts at the first row of the partition
+- \`n PRECEDING\`: Starts or ends at a specified number of rows or interval before the current row
+- \`CURRENT ROW\`: Starts or ends at the current row
+
+When the frame clause is not specified, the default frame is
+\`RANGE UNBOUNDED PRECEDING\`, which includes all rows from the start of the
+partition to the current row.
+
+- If \`ORDER BY\` is not present, the frame includes the entire partition, as all
+ rows are considered equal.
+
+- If \`ORDER BY\` is present, the frame includes all rows from the start of the
+ partition to the current row. Note that \`UNBOUNDED FOLLOWING\` is only allowed
+ when the frame start is \`UNBOUNDED PRECEDING\`, which means the frame includes
+ the entire partition.
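+
+The difference between the two defaults can be sketched with a toy Python model (not QuestDB internals): with \`ORDER BY\`, the default frame behaves like a running total; without it, every row sees the whole partition.
+
+\`\`\`python
+# Toy model of the default RANGE UNBOUNDED PRECEDING frame (not QuestDB internals).
+def window_sum(values, ordered):
+    if ordered:
+        # With ORDER BY: running total from partition start to the current row.
+        out, running = [], 0
+        for v in values:
+            running += v
+            out.append(running)
+        return out
+    # Without ORDER BY: every row sees the whole partition.
+    return [sum(values)] * len(values)
+
+prices = [10, 20, 30]
+print(window_sum(prices, ordered=True))   # [10, 30, 60]
+print(window_sum(prices, ordered=False))  # [60, 60, 60]
+\`\`\`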
+
+### Restrictions
+
+1. Frame start can only be:
+ - \`UNBOUNDED PRECEDING\`
+ - \`n PRECEDING\`
+ - \`CURRENT ROW\`
+
+2. Frame end can only be:
+ - \`CURRENT ROW\`
+ - \`n PRECEDING\` (unless start is \`UNBOUNDED PRECEDING\`)
+
+3. RANGE frames must be ordered by a designated timestamp
+
+## Exclusion options
+
+Modifies the window frame by excluding certain rows:
+
+### EXCLUDE NO OTHERS
+
+- Default behavior
+- Includes all rows in the frame
+
+\`\`\`mermaid
+sequenceDiagram
+ participant R1 as Row 1
+ participant R2 as Row 2
+ participant CR as Current Row
+ participant R4 as Row 4
+
+ rect rgb(255, 223, 191)
+ Note over R1,CR: Frame includes all rows from the frame start up to and including the current row
+ end
+\`\`\`
+
+### EXCLUDE CURRENT ROW
+
+- Excludes the current row from the frame
+- When frame ends at \`CURRENT ROW\`, end boundary automatically adjusts to \`1 PRECEDING\`
+- This automatic adjustment ensures that the current row is effectively excluded from the calculation, as there cannot be a frame that ends after the current row when the current row is excluded.
+
+\`\`\`mermaid
+sequenceDiagram
+ participant R1 as Row 1
+ participant R2 as Row 2
+ participant CR as Current Row
+ participant R4 as Row 4
+
+ rect rgb(255, 223, 191)
+ Note over R1,R2: Frame includes all rows from the frame start up to one row before the current row (excluding the current row)
+ end
+ rect rgba(255, 0, 0, 0.1)
+ Note over CR: Current Row is excluded
+ end
+\`\`\`
+
+#### Example query
+
+To tie it together, consider the following example:
+
+\`\`\`questdb-sql title="EXCLUSION example" demo
+SELECT
+ timestamp,
+ price,
+ SUM(price) OVER (
+ ORDER BY timestamp
+ ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
+ EXCLUDE CURRENT ROW
+ ) AS cumulative_sum_excluding_current
+FROM trades;
+\`\`\`
+
+The query calculates a cumulative sum of the price column for each row in the trades table, excluding the current row from the calculation. By using \`EXCLUDE CURRENT ROW\`, the window frame adjusts to include all rows from the start up to one row before the current row. This demonstrates how the \`EXCLUDE CURRENT ROW\` option modifies the window frame to exclude the current row, affecting the result of the window function.
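+
+The adjusted frame can be mimicked with a short Python sketch (a toy model, not QuestDB internals). Note that in SQL the first row's empty frame yields \`NULL\`; the sketch uses \`0.0\` for simplicity:
+
+\`\`\`python
+# Toy model of SUM() OVER (ROWS UNBOUNDED PRECEDING ... EXCLUDE CURRENT ROW).
+def cumulative_sum_excluding_current(values):
+    out, running = [], 0.0
+    for v in values:
+        out.append(running)  # frame ends at 1 PRECEDING: current value not yet added
+        running += v
+    return out
+
+print(cumulative_sum_excluding_current([10.0, 20.0, 30.0]))  # [0.0, 10.0, 30.0]
+\`\`\`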
+
+
+
+## Notes and restrictions
+
+### ORDER BY behavior
+
+- ORDER BY in OVER clause determines the logical order for window functions
+- Independent of the query-level ORDER BY
+- Required for window-only functions
+- Required for RANGE frames
+
+### Frame specifications
+
+- ROWS frames:
+ - Based on physical row counts
+ - More efficient for large datasets
+ - Can be used with any ORDER BY column
+
+- RANGE frames:
+ - Defines the frame based on the actual values in the ORDER BY column, rather than counting rows.
+ - Require ORDER BY on timestamp
+ - Support time-based intervals (e.g., '1h', '5m')
+
+### Exclusion behavior
+
+- Using \`EXCLUDE CURRENT ROW\` with frame end at \`CURRENT ROW\`:
+ - Automatically adjusts end boundary to \`1 PRECEDING\`
+ - Ensures consistent results across queries
+
+### Performance considerations
+
+- ROWS frames typically perform better than RANGE frames for large datasets
+- Partitioning can improve performance by processing smaller chunks of data
+- Consider index usage when ordering by timestamp columns
+
+### Common pitfalls
+
+#### Using window functions in WHERE clauses
+
+\`\`\`questdb-sql title="Not allowed!"
+-- Incorrect usage
+SELECT
+ symbol,
+ price,
+ timestamp
+FROM trades
+WHERE
+ avg(price) OVER (ORDER BY timestamp) > 100;
+\`\`\`
+
+Instead, build like so:
+
+\`\`\`questdb-sql title="Correct usage" demo
+WITH prices_and_avg AS (
+SELECT
+ symbol,
+ price,
+ avg(price) OVER (ORDER BY timestamp) AS moving_avg_price,
+ timestamp
+FROM trades
+WHERE timestamp in yesterday()
+)
+SELECT * FROM prices_and_avg
+WHERE
+ moving_avg_price > 100;
+\`\`\`
+
+#### Missing ORDER BY in OVER clause
+
+When no \`ORDER BY\` is specified, the average will be calculated for the whole
+partition. Given we don't have a PARTITION BY and we are using a global window,
+all the rows will show the same average. This is the average for the whole
+dataset.
+
+\`\`\`questdb-sql title="Missing ORDER BY"
+-- Potential issue
+SELECT
+ symbol,
+ price,
+ sum(price) OVER () AS cumulative_sum
+FROM trades
+WHERE timestamp in yesterday();
+\`\`\`
+
+To compute the _moving average_, we need to specify an \`ORDER BY\` clause:
+
+\`\`\`questdb-sql title="Safer usage" demo
+SELECT
+ symbol,
+ price,
+ sum(price) OVER (ORDER BY timestamp) AS cumulative_sum
+FROM trades
+WHERE timestamp in yesterday();
+\`\`\`
+
+We may also have a case where all the rows for the same partition (symbol) will
+have the same average, if we include a \`PARTITION BY\` clause without an
+\`ORDER BY\` clause:
+
+\`\`\`questdb-sql title="Partitioned usage" demo
+-- Potential issue
+SELECT
+ symbol,
+ price,
+ sum(price) OVER (PARTITION BY symbol) AS cumulative_sum
+FROM trades
+WHERE timestamp in yesterday();
+\`\`\`
+
+For every row to show the moving average for each symbol, we need to specify both
+an \`ORDER BY\` and a \`PARTITION BY\` clause:
+
+\`\`\`questdb-sql title="Partitioned and ordered usage" demo
+SELECT
+ symbol,
+ price,
+ sum(price) OVER (PARTITION BY symbol ORDER BY timestamp) AS cumulative_sum
+FROM trades
+WHERE timestamp in yesterday();
+\`\`\`
+`
+ },
+ {
+ path: "sql/overview.md",
+ title: "Query & SQL Overview",
+ headers: ["QuestDB Web Console", "PostgreSQL", "REST HTTP API", "Apache Parquet", "What's next?"],
+ content: `import Screenshot from "@theme/Screenshot"
+
+import Tabs from "@theme/Tabs"
+
+import TabItem from "@theme/TabItem"
+
+import CQueryPartial from "../../partials/\\_c.sql.query.partial.mdx"
+
+import CsharpQueryPartial from "../../partials/\\_csharp.sql.query.partial.mdx"
+
+import GoQueryPartial from "../../partials/\\_go.sql.query.partial.mdx"
+
+import JavaQueryPartial from "../../partials/\\_java.sql.query.partial.mdx"
+
+import NodeQueryPartial from "../../partials/\\_nodejs.sql.query.partial.mdx"
+
+import RubyQueryPartial from "../../partials/\\_ruby.sql.query.partial.mdx"
+
+import PHPQueryPartial from "../../partials/\\_php.sql.query.partial.mdx"
+
+import PythonQueryPartial from "../../partials/\\_python.sql.query.partial.mdx"
+
+import CurlExecQueryPartial from "../../partials/\\_curl.exec.query.partial.mdx"
+
+import GoExecQueryPartial from "../../partials/\\_go.exec.query.partial.mdx"
+
+import NodejsExecQueryPartial from "../../partials/\\_nodejs.exec.query.partial.mdx"
+
+import PythonExecQueryPartial from "../../partials/\\_python.exec.query.partial.mdx"
+
+Querying - as a base action - is performed in four primary ways:
+
+1. Query via the
+ [QuestDB Web Console](/docs/reference/sql/overview/#questdb-web-console)
+2. Query via [PostgreSQL](/docs/reference/sql/overview/#postgresql)
+3. Query via [REST HTTP API](/docs/reference/sql/overview/#rest-http-api)
+4. Query via [Apache Parquet](/docs/reference/sql/overview/#apache-parquet)
+
+For efficient and clear querying, QuestDB provides SQL with enhanced time series
+extensions. This makes analyzing, downsampling, processing and reading time
+series data an intuitive and flexible experience.
+
+Queries can be written into many applications using existing drivers and clients
+of the PostgreSQL or RESTful ecosystems. However, querying is also leveraged
+heavily by third-party tools to provide visualizations, such as within
+[Grafana](/docs/third-party-tools/grafana/), or for connectivity into broad data
+infrastructure and application environments such as with a tool like
+[Cube](/docs/third-party-tools/cube/).
+
+> Need to ingest data first? Check out our
+> [Ingestion overview](/docs/ingestion-overview/).
+
+## QuestDB Web Console
+
+The Web Console is available by default at
+[localhost:9000](http://localhost:9000). The GUI makes it easy to write, run
+and chart queries. It offers autocomplete, syntax highlighting, error
+reporting, and more. If you want to test a query or interact directly with
+your data in the cleanest and simplest way, apply queries via the
+[Web Console](/docs/web-console/).
+
+
+
+For an example, click _Demo this query_ in the below snippet. This will run a
+query within our public demo instance and [Web Console](/docs/web-console/):
+
+\`\`\`questdb-sql title='Navigate time with SQL' demo
+SELECT
+ timestamp, symbol,
+ first(price) AS open,
+ last(price) AS close,
+ min(price),
+ max(price),
+ sum(amount) AS volume
+FROM trades
+WHERE timestamp > dateadd('d', -1, now())
+SAMPLE BY 15m;
+\`\`\`
+
+If you see _Demo this query_ on other snippets in these docs, they can be run
+against the demo instance.
+
+## PostgreSQL
+
+Query QuestDB using the PostgreSQL endpoint via the default port \`8812\`.
+
+See [PGWire Client overview](/docs/pgwire/pgwire-intro/) for details on how to
+connect to QuestDB using PostgreSQL clients.
+
+Brief examples in multiple languages are shown below.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+#### Further Reading
+
+See the [PGWire Client overview](/docs/pgwire/pgwire-intro/) for more details on how to use PostgreSQL
+clients to connect to QuestDB.
+
+## REST HTTP API
+
+QuestDB exposes a REST API for compatibility with a wide range of libraries and
+tools.
+
+The REST API is accessible on port \`9000\` and has the following query-capable
+entrypoints:
+
+For details such as content type, query parameters and more, refer to the
+[REST HTTP API](/docs/reference/api/rest/) reference.
+
+| Entrypoint | HTTP Method | Description | REST HTTP API Reference |
+| :------------------------------------------ | :---------- | :-------------------------------------- | :------------------------------------------------------------ |
+| [\`/exp?query=..\`](#exp-sql-query-to-csv) | GET | Export SQL Query as CSV | [Reference](/docs/reference/api/rest/#exp---export-data) |
+| [\`/exec?query=..\`](#exec-sql-query-to-json) | GET | Run SQL Query returning JSON result set | [Reference](/docs/reference/api/rest/#exec---execute-queries) |
+
+#### \`/exp\`: SQL Query to CSV
+
+The \`/exp\` entrypoint allows querying the database with a SQL select query and
+obtaining the results as CSV.
+
+For obtaining results in JSON, use \`/exec\` instead, documented next.
+
+
+
+
+
+\`\`\`bash
+curl -G --data-urlencode \\
+ "query=SELECT * FROM example_table2 LIMIT 3" \\
+ http://localhost:9000/exp
+\`\`\`
+
+\`\`\`csv
+"col1","col2","col3"
+"a",10.5,true
+"b",100.0,false
+"c",,true
+\`\`\`
+
+
+
+
+
+\`\`\`python
+import requests
+
+resp = requests.get(
+ 'http://localhost:9000/exp',
+ {
+ 'query': 'SELECT * FROM example_table2',
+ 'limit': '3,6' # Rows 3, 4, 5
+ })
+print(resp.text)
+\`\`\`
+
+\`\`\`csv
+"col1","col2","col3"
+"d",20.5,true
+"e",200.0,false
+"f",,true
+\`\`\`
+
+
+
+
+
+#### \`/exec\`: SQL Query to JSON
+
+The \`/exec\` entrypoint takes a SQL query and returns results as JSON.
+
+This is similar to the \`/exp\` entrypoint, which returns results as CSV.
+
+##### Querying Data
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Alternatively, the \`/exec\` endpoint can be used to create a table and the
+\`INSERT\` statement can be used to populate it with values:
+
+
+
+
+
+\`\`\`shell
+# Create Table
+curl -G \\
+ --data-urlencode "query=CREATE TABLE IF NOT EXISTS trades(name VARCHAR, value INT)" \\
+ http://localhost:9000/exec
+
+# Insert a row
+curl -G \\
+ --data-urlencode "query=INSERT INTO trades VALUES('abc', 123456)" \\
+ http://localhost:9000/exec
+
+# Update a row
+curl -G \\
+ --data-urlencode "query=UPDATE trades SET value = 9876 WHERE name = 'abc'" \\
+ http://localhost:9000/exec
+\`\`\`
+
+
+
+
+
+The \`node-fetch\` package can be installed using \`npm i node-fetch\`.
+
+\`\`\`javascript
+const fetch = require("node-fetch");
+
+const HOST = "http://localhost:9000";
+
+async function createTable() {
+ try {
+ const query = "CREATE TABLE IF NOT EXISTS trades (name VARCHAR, value INT)";
+
+ const response = await fetch(
+ \`\${HOST}/exec?query=\${encodeURIComponent(query)}\`,
+ );
+ const json = await response.json();
+
+ console.log(json);
+ } catch (error) {
+ console.log(error);
+ }
+}
+
+async function insertData() {
+ try {
+ const query = "INSERT INTO trades VALUES('abc', 123456)";
+
+ const response = await fetch(
+ \`\${HOST}/exec?query=\${encodeURIComponent(query)}\`,
+ );
+ const json = await response.json();
+
+ console.log(json);
+ } catch (error) {
+ console.log(error);
+ }
+}
+
+async function updateData() {
+ try {
+ const query = "UPDATE trades SET value = 9876 WHERE name = 'abc'";
+
+ const response = await fetch(
+ \`\${HOST}/exec?query=\${encodeURIComponent(query)}\`,
+ );
+ const json = await response.json();
+
+ console.log(json);
+ } catch (error) {
+ console.log(error);
+ }
+}
+
+createTable().then(insertData).then(updateData);
+\`\`\`
+
+
+
+
+
+\`\`\`python
+import requests
+import json
+
+host = 'http://localhost:9000'
+
+def run_query(sql_query):
+ query_params = {'query': sql_query, 'fmt' : 'json'}
+ try:
+ response = requests.get(host + '/exec', params=query_params)
+ json_response = json.loads(response.text)
+ print(json_response)
+ except requests.exceptions.RequestException as e:
+ print("Error: %s" % (e))
+
+# create table
+run_query("CREATE TABLE IF NOT EXISTS trades (name VARCHAR, value INT)")
+# insert row
+run_query("INSERT INTO trades VALUES('abc', 123456)")
+# update row
+run_query("UPDATE trades SET value = 9876 WHERE name = 'abc'")
+\`\`\`
+
+
+
+
+
+## Apache Parquet
+
+:::info
+
+Apache Parquet support is in **beta**. It may not be fit for production use.
+
+Please let us know if you run into issues. Either:
+
+1. Email us at [support@questdb.io](mailto:support@questdb.io)
+2. Join our [public Slack](https://slack.questdb.com/)
+3. Post on our [Discourse community](https://community.questdb.com/)
+
+:::
+
+Parquet files can be read and thus queried by QuestDB.
+
+QuestDB is shipped with a demo Parquet file, \`trades.parquet\`, which can be
+queried using the \`read_parquet\` function.
+
+Example:
+
+\`\`\`questdb-sql title="read_parquet example"
+SELECT
+ *
+FROM
+ read_parquet('trades.parquet')
+WHERE
+ side = 'buy';
+\`\`\`
+
+The \`trades.parquet\` file is located in the \`import\` subdirectory inside the
+QuestDB root directory. Drop your own Parquet files into the \`import\` directory
+and query them using the \`read_parquet()\` function.
+
+You can change the allowed directory by setting the \`cairo.sql.copy.root\`
+configuration key.
+
+For more information, see the
+[Parquet documentation](/docs/reference/function/parquet/).
+
+## What's next?
+
+Now... SQL! It's query time.
+
+Whether you want to use the [Web Console](/docs/web-console/), PostgreSQL or REST HTTP (or all of them),
+query construction is rich.
+
+To brush up and learn what's unique in QuestDB, consider the following:
+
+- [Data types](/docs/reference/sql/datatypes/)
+- [SQL execution order](/docs/concept/sql-execution-order/)
+
+And to learn about some of our favourite, most powerful syntax:
+
+- [Window functions](/docs/reference/function/window/) are a powerful analysis
+ tool
+- [Aggregate functions](/docs/reference/function/aggregation/) - aggregations
+ are key!
+- [Date & time operators](/docs/reference/operators/date-time/) to learn about
+ date and time
+- [\`SAMPLE BY\`](/docs/reference/sql/sample-by/) to summarize data into chunks
+ based on a specified time interval, from a year to a microsecond
+- [\`WHERE IN\`](/docs/reference/sql/where/#time-range-where-in) to compress time ranges
+ into concise intervals
+- [\`LATEST ON\`](/docs/reference/sql/latest-on/) for latest values within
+ multiple series within a table
+- [\`ASOF JOIN\`](/docs/reference/sql/asof-join/) to associate timestamps between
+ a series based on proximity; no extra indices required
+- [Materialized Views](/docs/guides/mat-views/) to pre-compute complex queries
+ for optimal performance
+
+Looking for visuals?
+
+- Explore [Grafana](/docs/third-party-tools/grafana/)
+- Jump quickly into the [Web Console](/docs/web-console/)
+`
+ },
+ {
+ path: "sql/refresh-mat-view.md",
+ title: "REFRESH MATERIALIZED VIEW",
+ headers: ["Syntax", "See also"],
+ content: `:::info
+
+Materialized View support is now generally available (GA) and ready for
+production use.
+
+If you are using versions earlier than \`8.3.1\`, we suggest you upgrade at your
+earliest convenience.
+
+:::
+
+\`REFRESH MATERIALIZED VIEW\` refreshes a materialized view. This is helpful when
+a view becomes invalid and no longer refreshes incrementally.
+
+When the \`FULL\` keyword is specified, this command deletes the data in the
+target materialized view and inserts the results of the query into the view. It
+also marks the materialized view as valid, reactivating the incremental refresh
+processes.
+
+When the \`INCREMENTAL\` keyword is used, the \`REFRESH\` command schedules an
+incremental refresh of the materialized view. Usually, incremental refresh is
+automatic, so this command is useful only in niche situations when incremental
+refresh is not working as expected, but the view is still valid.
+
+When the \`RANGE\` keyword is specified, this command refreshes the data in the
+specified time range only. This is useful for a valid materialized view with a
+configured
+[\`REFRESH LIMIT\`](/docs/reference/sql/alter-mat-view-set-refresh-limit/). That's
+because inserted base table rows with timestamps older than the refresh limit
+are ignored by incremental refresh, so a range refresh may be used to
+recalculate the materialized view for older rows. Range refresh does not affect
+incremental refresh, i.e. it does not update the last base table transaction
+used by incremental refresh.
+
+## Syntax
+
+
+
+## Examples
+
+\`\`\`questdb-sql
+REFRESH MATERIALIZED VIEW trades_1h FULL;
+\`\`\`
+
+\`\`\`questdb-sql
+REFRESH MATERIALIZED VIEW trades_1h INCREMENTAL;
+\`\`\`
+
+\`\`\`questdb-sql
+REFRESH MATERIALIZED VIEW trades_1h RANGE FROM '2025-05-05T01:00:00.000000Z' TO '2025-05-05T02:00:00.000000Z';
+\`\`\`
+
+## See also
+
+For more information on the concept, see the
+[introduction](/docs/concept/mat-views/) and [guide](/docs/guides/mat-views/) on
+materialized views.
+`
+ },
+ {
+ path: "sql/reindex.md",
+ title: "REINDEX",
+ headers: ["Syntax", "Options"],
+ content: `Rebuilds one or more [index](/docs/concept/indexes/) columns of the given table.
+This operation is intended to be used after a hardware or software crash, when
+the index data are corrupted and the table cannot be opened for writes.
+
+The operation can only be performed when there are no other readers or writers
+working on the table. During the operation, the table is locked and no reads or
+writes should be performed on the selected table.
+
+## Syntax
+
+
+
+## Options
+
+By default, \`REINDEX\` rebuilds all indexes in the selected table. The following
+options can be used to narrow down the scope of the operation:
+
+- \`COLUMN\`: When defined, \`REINDEX\` rebuilds the index for the selected column.
+- \`PARTITION\`: When defined, \`REINDEX\` rebuilds index files in the selected
+ partition only. The partition name must match the name of the directory for
+ the given partition. The naming convention is detailed in
+ [Partitions](/docs/concept/partitions/).
+
+## Example
+
+Rebuilding all the indexes in the table \`trades\`:
+
+\`\`\`questdb-sql title="Rebuilding an index"
+REINDEX TABLE trades LOCK EXCLUSIVE;
+\`\`\`
+
+Rebuilding the index in the column \`instruments\`:
+
+\`\`\`questdb-sql title="Rebuilding an index"
+REINDEX TABLE trades COLUMN instruments LOCK EXCLUSIVE;
+\`\`\`
+
+Rebuilding one partition (\`2021-12-17\`) of the index in the column
+\`instruments\`:
+
+\`\`\`questdb-sql title="Rebuilding an index"
+REINDEX TABLE trades COLUMN instruments PARTITION '2021-12-17' LOCK EXCLUSIVE;
+\`\`\`
+`
+ },
+ {
+ path: "sql/rename.md",
+ title: "RENAME TABLE keyword",
+ headers: ["Syntax"],
+ content: `\`RENAME TABLE\` is used to change the name of a table.
+
+## Syntax
+
+
+
+## Example
+
+\`\`\`questdb-sql
+RENAME TABLE 'test.csv' TO 'myTable';
+\`\`\`
+`
+ },
+ {
+ path: "sql/sample-by.md",
+ title: "SAMPLE BY keyword",
+ headers: ["Syntax", "Sample units", "FROM-TO", "Fill options", "Sample calculation", "ALIGN TO FIRST OBSERVATION", "ALIGN TO CALENDAR", "Performance optimization", "See also"],
+ content: `\`SAMPLE BY\` is used on [time-series data](/blog/what-is-time-series-data/) to summarize large datasets into
+aggregates of homogeneous time chunks as part of a
+[SELECT statement](/docs/reference/sql/select/).
+
+To use \`SAMPLE BY\`, a table column needs to be specified as a
+[designated timestamp](/docs/concept/designated-timestamp/).
+
+Users performing \`SAMPLE BY\` queries on datasets **with missing data** may make
+use of the [FILL](#fill-options) keyword to specify a fill behavior.
+
+## Syntax
+
+### SAMPLE BY keywords
+
+
+
+### FROM-TO keywords
+
+
+
+### FILL keywords
+
+
+
+### ALIGN TO keywords
+
+
+
+## Sample units
+
+The size of sampled groups are specified with the following syntax:
+
+\`\`\`questdb-sql
+SAMPLE BY n{units}
+\`\`\`
+
+Where the unit for sampled groups may be one of the following:
+
+| unit | description |
+| ---- | ----------- |
+| \`U\` | microsecond |
+| \`T\` | millisecond |
+| \`s\` | second |
+| \`m\` | minute |
+| \`h\` | hour |
+| \`d\` | day |
+| \`M\` | month |
+| \`y\` | year |
+
+For example, given a table \`trades\`, the following query returns the number of
+trades per hour:
+
+\`\`\`questdb-sql
+SELECT ts, count()
+FROM trades
+SAMPLE BY 1h;
+\`\`\`
+
+## FROM-TO
+
+:::note
+
+Versions prior to QuestDB 8.1.0 do not have access to this extension.
+
+See the QuestDB 8.1.0 release announcement for more information.
+
+:::
+
+When using \`SAMPLE BY\` with \`FILL\`, you can fill missing rows within the result set with pre-determined values.
+
+However, this method will only fill rows between existing data in the dataset and cannot fill rows outside of this range.
+
+To fill outside the bounds of the existing data, you can specify a fill range using a \`FROM-TO\` clause. The boundary
+timestamps are expected in UTC.
+
+Note that the \`FROM-TO\` clause can be used only on non-keyed SAMPLE BY queries, i.e. queries that have no grouping columns
+other than the timestamp.
+
+#### Syntax
+
+Specify the shape of the query using \`FROM\` and \`TO\`:
+
+\`\`\`questdb-sql title='Pre-filling trip data' demo
+SELECT pickup_datetime as t, count()
+FROM trips
+SAMPLE BY 1d FROM '2008-12-28' TO '2009-01-05' FILL(NULL);
+\`\`\`
+
+Since no rows existed before 2009, QuestDB automatically fills in these rows.
+
+This is distinct from the \`WHERE\` clause, with a simple rule of thumb:
+\`WHERE\` controls what data flows in, \`FROM-TO\` controls what data flows out.
+
+\`FROM\` and \`TO\` can also be used in isolation to pre-fill or post-fill data. If \`FROM\` is not provided, then the lower bound is the start of the dataset, aligned to calendar. The opposite applies when \`TO\` is omitted.
+
+#### \`WHERE\` clause optimisation
+
+If the user does not provide a \`WHERE\` clause, or the \`WHERE\` clause does not consider the designated timestamp,
+QuestDB will add one for you, matching the \`FROM-TO\` interval.
+
+This means that the query will run optimally, and avoid touching data not relevant to the result.
+
+Therefore, we compile the prior query into something similar to this:
+
+\`\`\`questdb-sql title='Pre-filling trip data with WHERE optimisation' demo
+SELECT pickup_datetime as t, count()
+FROM trips
+WHERE pickup_datetime >= '2008-12-28'
+ AND pickup_datetime < '2009-01-05'
+SAMPLE BY 1d FROM '2008-12-28' TO '2009-01-05' FILL(NULL);
+\`\`\`
+
+#### Limitations
+
+Here are the current limits to this feature.
+
+- This syntax is not compatible with \`FILL(PREV)\` or \`FILL(LINEAR)\`.
+- This syntax is for \`ALIGN TO CALENDAR\` only (default alignment).
+- Does not consider any specified \`OFFSET\`.
+- This syntax is for non-keyed \`SAMPLE BY\` i.e. only designated timestamp and aggregate columns.
+
+## Fill options
+
+The \`FILL\` keyword is optional and expects one or more \`fillOption\` strategies
+which will be applied to one or more aggregate columns. The following
+restrictions apply:
+
+- Keywords denoting fill strategies may not be combined. Only one option from
+ \`NONE\`, \`NULL\`, \`PREV\`, \`LINEAR\` and constants may be used.
+- \`LINEAR\` strategy is not supported for keyed queries, i.e. queries that
+ contain non-aggregated columns other than the timestamp in the SELECT clause.
+- The \`FILL\` keyword must precede alignment described in the
+ [sample calculation section](#sample-calculation), i.e.:
+
+\`\`\`questdb-sql
+SELECT ts, max(price) max
+FROM prices
+SAMPLE BY 1h FILL(LINEAR)
+ALIGN TO ...
+\`\`\`
+
+| fillOption | Description |
+| ---------- | ------------------------------------------------------------------------------------------------------------------------- |
+| \`NONE\` | No fill applied. If there is no data, the time sample will be skipped in the results. A table could be missing intervals. |
+| \`NULL\` | Fills with \`NULL\` values. |
+| \`PREV\` | Fills using the previous value. |
+| \`LINEAR\` | Fills by linear interpolation of the 2 surrounding points. |
+| \`x\` | Fills with a constant value - where \`x\` is the desired value, for example \`FILL(100.05)\`. |
+
+Consider an example table named \`prices\` which has no records during the entire
+third hour (\`2021-01-01T03\`):
+
+| ts | price |
+| --------------------------- | ----- |
+| 2021-01-01T01:00:00.000000Z | p1 |
+| 2021-01-01T02:00:00.000000Z | p2 |
+| 2021-01-01T04:00:00.000000Z | p4 |
+| 2021-01-01T05:00:00.000000Z | p5 |
+
+The following query returns the maximum price per hour. As there are missing
+values, an aggregate cannot be calculated:
+
+\`\`\`questdb-sql
+SELECT ts, max(price) max
+FROM prices
+SAMPLE BY 1h;
+\`\`\`
+
+A row is missing for the \`2021-01-01T03:00:00.000000Z\` sample:
+
+| ts | max |
+| --------------------------- | ---- |
+| 2021-01-01T01:00:00.000000Z | max1 |
+| 2021-01-01T02:00:00.000000Z | max2 |
+| 2021-01-01T04:00:00.000000Z | max4 |
+| 2021-01-01T05:00:00.000000Z | max5 |
+
+A \`FILL\` strategy can be employed which fills with the previous value using
+\`PREV\`:
+
+\`\`\`questdb-sql
+SELECT ts, max(price) max
+FROM prices
+SAMPLE BY 1h FILL(PREV);
+\`\`\`
+
+| ts | max |
+| ------------------------------- | -------- |
+| 2021-01-01T01:00:00.000000Z | max1 |
+| 2021-01-01T02:00:00.000000Z | max2 |
+| **2021-01-01T03:00:00.000000Z** | **max2** |
+| 2021-01-01T04:00:00.000000Z | max4 |
+| 2021-01-01T05:00:00.000000Z | max5 |
+
+Linear interpolation is done using the \`LINEAR\` fill option:
+
+\`\`\`questdb-sql
+SELECT ts, max(price) max
+FROM prices
+SAMPLE BY 1h FILL(LINEAR);
+\`\`\`
+
+| ts | max |
+| ------------------------------- | ----------------- |
+| 2021-01-01T01:00:00.000000Z | max1 |
+| 2021-01-01T02:00:00.000000Z | max2 |
+| **2021-01-01T03:00:00.000000Z** | **(max2+max4)/2** |
+| 2021-01-01T04:00:00.000000Z | max4 |
+| 2021-01-01T05:00:00.000000Z | max5 |
+
+A constant value can be used as a \`fillOption\`:
+
+\`\`\`questdb-sql
+SELECT ts, max(price) max
+FROM prices
+SAMPLE BY 1h FILL(100.5);
+\`\`\`
+
+| ts | max |
+| ------------------------------- | --------- |
+| 2021-01-01T01:00:00.000000Z | max1 |
+| 2021-01-01T02:00:00.000000Z | max2 |
+| **2021-01-01T03:00:00.000000Z** | **100.5** |
+| 2021-01-01T04:00:00.000000Z | max4 |
+| 2021-01-01T05:00:00.000000Z | max5 |
+
+Finally, \`NULL\` may be used as a \`fillOption\`:
+
+\`\`\`questdb-sql
+SELECT ts, max(price) max
+FROM prices
+SAMPLE BY 1h FILL(NULL);
+\`\`\`
+
+| ts | max |
+| ------------------------------- | -------- |
+| 2021-01-01T01:00:00.000000Z | max1 |
+| 2021-01-01T02:00:00.000000Z | max2 |
+| **2021-01-01T03:00:00.000000Z** | **null** |
+| 2021-01-01T04:00:00.000000Z | max4 |
+| 2021-01-01T05:00:00.000000Z | max5 |
+
+### Multiple fill values
+
+\`FILL()\` accepts a list of values where each value corresponds to a single
+aggregate column in the SELECT clause order:
+
+\`\`\`questdb-sql
+SELECT min(price), max(price), avg(price), ts
+FROM prices
+SAMPLE BY 1h
+FILL(NULL, 10, PREV);
+\`\`\`
+
+In the above query, the \`min(price)\` aggregate gets the \`FILL(NULL)\`
+strategy, \`max(price)\` gets \`FILL(10)\`, and \`avg(price)\` gets \`FILL(PREV)\`.
+
+## Sample calculation
+
+Sampled groups can be aligned in two ways: to calendar dates (the default), or
+to the first observation, in which case sampling by one day means an absolute
+24-hour range that is not bound to calendar dates. The \`ALIGN TO\` keywords
+control this behaviour and are described in the
+[ALIGN TO CALENDAR](#align-to-calendar) section below.
+
+:::note
+
+Since QuestDB v7.4.0, the default behaviour for \`ALIGN TO\` has changed. If you do not specify
+an explicit alignment, \`SAMPLE BY\` expressions will use \`ALIGN TO CALENDAR\` behaviour.
+
+The prior default behaviour can be retained by specifying \`ALIGN TO FIRST OBSERVATION\` on a \`SAMPLE BY\` query.
+
+Alternatively, one can set the \`cairo.sql.sampleby.default.alignment.calendar\` option to \`false\` in \`server.conf\`.
+
+:::
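+
+For example, the corresponding \`server.conf\` entry would look like this (a
+sketch; a server restart is required for the change to take effect):
+
+\`\`\`ini
+# revert SAMPLE BY to the pre-7.4.0 default alignment
+cairo.sql.sampleby.default.alignment.calendar=false
+\`\`\`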
+
+## ALIGN TO FIRST OBSERVATION
+
+Consider a table \`sensors\` with the following data spanning three calendar days:
+
+\`\`\`questdb-sql
+CREATE TABLE sensors (
+ ts TIMESTAMP,
+ val INT
+) TIMESTAMP(ts) PARTITION BY DAY WAL;
+
+INSERT INTO sensors (ts, val) VALUES
+ ('2021-05-31T23:10:00.000000Z', 10),
+ ('2021-06-01T01:10:00.000000Z', 80),
+ ('2021-06-01T07:20:00.000000Z', 15),
+ ('2021-06-01T13:20:00.000000Z', 10),
+ ('2021-06-01T19:20:00.000000Z', 40),
+ ('2021-06-02T01:10:00.000000Z', 90),
+ ('2021-06-02T07:20:00.000000Z', 30);
+\`\`\`
+
+The following query can be used to sample the table by day.
+
+\`\`\`questdb-sql
+SELECT ts, count()
+FROM sensors
+SAMPLE BY 1d
+ALIGN TO FIRST OBSERVATION;
+\`\`\`
+
+This query will return two rows:
+
+| ts | count |
+| --------------------------- | ----- |
+| 2021-05-31T23:10:00.000000Z | 5 |
+| 2021-06-01T23:10:00.000000Z | 2 |
+
+The timestamp values for the 24-hour groups start at the first-observed
+timestamp and continue in \`1d\` intervals.
+
+## ALIGN TO CALENDAR
+
+This is the default behaviour for \`SAMPLE BY\`. It aligns data to calendar dates and takes two optional parameters:
+
+- [TIME ZONE](#time-zone)
+- [WITH OFFSET](#with-offset)
+
+\`\`\`questdb-sql
+SELECT ts, count()
+FROM sensors
+SAMPLE BY 1d;
+\`\`\`
+
+or:
+
+\`\`\`questdb-sql
+SELECT ts, count()
+FROM sensors
+SAMPLE BY 1d
+ALIGN TO CALENDAR;
+\`\`\`
+
+Gives the following result:
+
+| ts | count |
+| --------------------------- | ----- |
+| 2021-05-31T00:00:00.000000Z | 1 |
+| 2021-06-01T00:00:00.000000Z | 4 |
+| 2021-06-02T00:00:00.000000Z | 2 |
+
+In this case, the timestamps are floored to the nearest UTC day, and grouped. The counts correspond
+to the number of entries occurring within each UTC day.
+
+This is particularly useful for summarising data for charting purposes; see the [candlestick chart](https://dashboard.questdb.io/d-solo/fb13b4ab-b1c9-4a54-a920-b60c5fb0363f/public-dashboard-questdb-io-use-cases-crypto?orgId=1&refresh=750ms&panelId=6) from the example [crypto dashboard](https://questdb.com/dashboards/crypto/).
+
+### TIME ZONE
+
+A time zone may be provided for sampling with calendar alignment. Details on the
+options for specifying time zones with available formats are provided in the
+guide for
+[working with timestamps and time zones](/docs/guides/working-with-timestamps-timezones/).
+
+\`\`\`questdb-sql
+SELECT ts, count()
+FROM sensors
+SAMPLE BY 1d
+ALIGN TO CALENDAR TIME ZONE 'Europe/Berlin';
+\`\`\`
+
+In this case, the 24 hour samples begin at \`2021-05-31T22:00:00.000000Z\`:
+
+| ts | count |
+| --------------------------- | ----- |
+| 2021-05-31T22:00:00.000000Z | 5 |
+| 2021-06-01T22:00:00.000000Z | 2 |
+
+Additionally, an offset may be applied when aligning sample calculation to the
+calendar:
+
+\`\`\`questdb-sql
+SELECT ts, count()
+FROM sensors
+SAMPLE BY 1d
+ALIGN TO CALENDAR TIME ZONE 'Europe/Berlin' WITH OFFSET '00:45';
+\`\`\`
+
+In this case, the 24 hour samples begin at \`2021-05-31T22:45:00.000000Z\`:
+
+| ts | count |
+| --------------------------- | ----- |
+| 2021-05-31T22:45:00.000000Z | 5 |
+| 2021-06-01T22:45:00.000000Z | 2     |
+
+#### Local timezone output
+
+The timestamp values output from \`SAMPLE BY\` queries are in UTC. To convert
+UTC values to a specific time zone, use the
+[to_timezone() function](/docs/reference/function/date-time/#to_timezone).
+
+\`\`\`questdb-sql
+SELECT to_timezone(ts, 'PST') ts, count
+FROM (
+ SELECT ts, count()
+ FROM sensors
+ SAMPLE BY 2h
+ ALIGN TO CALENDAR TIME ZONE 'PST'
+);
+\`\`\`
+
+#### Time zone transitions
+
+Calendar dates may contain historical time zone transitions, or may vary in the
+total number of hours due to daylight saving time. Consider 31 October 2021 in
+the \`Europe/London\` time zone, a calendar day which consists of 25 hours:
+
+> - Sunday, 31 October 2021, 02:00:00 clocks are turned backward 1 hour to
+> - Sunday, 31 October 2021, 01:00:00 local standard time
+
+When a \`SAMPLE BY\` operation crosses time zone transitions in cases such as
+this, the first sampled group which spans a transition will include aggregates
+by full calendar range. Consider a table \`sensors\` with one data point per hour
+spanning five calendar hours:
+
+| ts | val |
+| --------------------------- | --- |
+| 2021-10-31T00:10:00.000000Z | 10 |
+| 2021-10-31T01:10:00.000000Z | 20 |
+| 2021-10-31T02:10:00.000000Z | 30 |
+| 2021-10-31T03:10:00.000000Z | 40 |
+| 2021-10-31T04:10:00.000000Z | 50 |
+
+The following query will sample by hour with the \`Europe/London\` time zone and
+align to calendar ranges:
+
+\`\`\`questdb-sql
+SELECT ts, count()
+FROM sensors
+SAMPLE BY 1h
+ALIGN TO CALENDAR TIME ZONE 'Europe/London';
+\`\`\`
+
+The sampled group for the hour which encounters the time zone transition spans
+both local hours at the transition, so it contains two records:
+
+| ts | count |
+| --------------------------- | ----- |
+| 2021-10-31T00:00:00.000000Z | 2 |
+| 2021-10-31T01:00:00.000000Z | 1 |
+| 2021-10-31T02:00:00.000000Z | 1 |
+| 2021-10-31T03:00:00.000000Z | 1 |
+
+Similarly, given one data point per hour on this table, running \`SAMPLE BY 1d\`
+will have a count of \`25\` for this day when aligned to calendar time zone
+\`Europe/London\`.
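+
+Assuming one data point per hour for the full calendar day, the 25-hour count
+can be checked directly:
+
+\`\`\`questdb-sql
+SELECT ts, count()
+FROM sensors
+SAMPLE BY 1d
+ALIGN TO CALENDAR TIME ZONE 'Europe/London';
+\`\`\`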
+
+### WITH OFFSET
+
+The sample alignment can be given an arbitrary offset in the format
+\`'+/-HH:mm'\`, for example:
+
+- \`'00:30'\` plus thirty minutes
+- \`'+00:30'\` plus thirty minutes
+- \`'-00:15'\` minus 15 minutes
+
+The query uses the default offset '00:00' if the parameter is not set.
+
+\`\`\`questdb-sql
+SELECT ts, count()
+FROM sensors
+SAMPLE BY 1d
+ALIGN TO CALENDAR WITH OFFSET '02:00';
+\`\`\`
+
+In this case, the 24 hour samples begin at \`2021-05-31T02:00:00.000000Z\`:
+
+| ts | count |
+| --------------------------- | ----- |
+| 2021-05-31T02:00:00.000000Z | 2 |
+| 2021-06-01T02:00:00.000000Z | 4 |
+| 2021-06-02T02:00:00.000000Z | 1 |
+
+### TIME ZONE WITH OFFSET
+
+The \`TIME ZONE\` and \`WITH OFFSET\` options can be combined.
+
+\`\`\`questdb-sql
+SELECT ts, count()
+FROM sensors
+SAMPLE BY 1h
+ALIGN TO CALENDAR TIME ZONE 'Europe/London' WITH OFFSET '02:00';
+\`\`\`
+
+The sample then begins from \`Europe/London\` at \`2021-10-31T02:00:00.000000Z\`:
+
+| ts | count |
+| --------------------------- | ----- |
+| 2021-10-31T02:00:00.000000Z | 1 |
+| 2021-10-31T03:00:00.000000Z | 1 |
+| 2021-10-31T04:00:00.000000Z | 3 |
+| 2021-10-31T05:00:00.000000Z | 2 |
+
+## Examples
+
+Assume the following table \`trades\`:
+
+| ts | quantity | price |
+| --------------------------- | -------- | ------ |
+| 2021-05-31T23:45:10.000000Z | 10 | 100.05 |
+| 2021-06-01T00:01:33.000000Z | 5 | 100.05 |
+| 2021-06-01T00:15:14.000000Z | 200 | 100.15 |
+| 2021-06-01T00:30:40.000000Z | 300 | 100.15 |
+| 2021-06-01T00:45:20.000000Z | 10 | 100 |
+| 2021-06-01T01:00:50.000000Z | 50 | 100.15 |
+
+This query will return the number of trades per hour:
+
+\`\`\`questdb-sql title="Hourly interval"
+SELECT ts, count()
+FROM trades
+SAMPLE BY 1h;
+\`\`\`
+
+| ts                          | count |
+| --------------------------- | ----- |
+| 2021-05-31T23:45:10.000000Z | 4     |
+| 2021-06-01T00:45:10.000000Z | 2     |
+
+The following will return the traded volume (quantity \\* price) in 30-minute intervals:
+
+\`\`\`questdb-sql title="30 minute interval"
+SELECT ts, sum(quantity*price)
+FROM trades
+SAMPLE BY 30m;
+\`\`\`
+
+| ts                          | sum     |
+| --------------------------- | ------- |
+| 2021-05-31T23:45:10.000000Z | 1500.75 |
+| 2021-06-01T00:15:10.000000Z | 50075   |
+| 2021-06-01T00:45:10.000000Z | 6007.5  |
+
+The following will return the average trade notional (notional = quantity
+\\* price) by day:
+
+\`\`\`questdb-sql title="Daily interval"
+SELECT ts, avg(quantity*price)
+FROM trades
+SAMPLE BY 1d;
+\`\`\`
+
+| ts                          | avg               |
+| --------------------------- | ----------------- |
+| 2021-05-31T23:45:10.000000Z | 9597.208333333334 |
+
+To make this sample align to calendar dates:
+
+\`\`\`questdb-sql title="Calendar alignment"
+SELECT ts, avg(quantity*price)
+FROM trades
+SAMPLE BY 1d
+ALIGN TO CALENDAR;
+\`\`\`
+
+| ts                          | avg      |
+| --------------------------- | -------- |
+| 2021-05-31T00:00:00.000000Z | 1000.5   |
+| 2021-06-01T00:00:00.000000Z | 11316.55 |
+
+## Performance optimization
+
+For frequently executed \`SAMPLE BY\` queries, consider using [materialized views](/docs/guides/mat-views/) to pre-compute aggregates. This can significantly improve query performance, especially for complex sampling operations on large datasets.
+
+\`\`\`questdb-sql
+CREATE MATERIALIZED VIEW hourly_metrics AS (
+  SELECT
+    ts,
+    symbol,
+    avg(price) AS avg_price,
+    sum(volume) AS total_volume
+  FROM trades
+  SAMPLE BY 1h
+) PARTITION BY DAY;
+\`\`\`
+
+## See also
+
+This section includes links to additional information such as tutorials:
+
+- [Materialized Views Guide](/docs/guides/mat-views/) - Pre-compute SAMPLE BY queries for better performance
+- [SQL Extensions for Time-Series Data in QuestDB](/blog/2022/11/23/sql-extensions-time-series-data-questdb-part-ii/)
+- [Three SQL Keywords for Finding Missing Data](/blog/three-sql-keywords-for-finding-missing-data/)
+`
+ },
+ {
+ path: "sql/select.md",
+ title: "SELECT keyword",
+ headers: ["Syntax", "Simple select", "Boolean expressions", "Aggregation", "Supported clauses", "Additional time-series clauses"],
+ content: `\`SELECT\` allows you to specify a list of columns and expressions to be selected
+and evaluated from a table.
+
+:::tip
+
+Looking for SELECT best practices? Check out our
+[**Maximize your SQL efficiency: SELECT best practices**](/blog/2024/03/11/sql-select-statement-best-practices/)
+blog.
+
+:::
+
+## Syntax
+
+
+
+Note: \`table\` can be either a table in your database or the result of a
+sub-query passed forward.
+
+## Simple select
+
+### All columns
+
+QuestDB supports \`SELECT * FROM tablename\`. When selecting all, you can also
+omit most of the statement and pass the table name.
+
+The two examples below are equivalent:
+
+\`\`\`questdb-sql title="QuestDB dialect"
+trades;
+\`\`\`
+
+\`\`\`questdb-sql title="Traditional SQL equivalent"
+SELECT * FROM trades;
+\`\`\`
+
+### Specific columns
+
+To select specific columns, replace \\* with the names of the columns you are
+interested in.
+
+Example:
+
+\`\`\`questdb-sql
+SELECT timestamp, symbol, side FROM trades;
+\`\`\`
+
+### Aliases
+
+Aliases let you give expressions or columns names of your choice. You can
+assign an alias to a column or an expression by writing the alias name you want
+after that expression.
+
+:::note
+
+Alias names and column names must be unique.
+
+:::
+
+\`\`\`questdb-sql
+SELECT timestamp, symbol,
+ price AS rate,
+ amount quantity
+FROM trades;
+\`\`\`
+
+Notice how you can use or omit the \`AS\` keyword.
+
+### Arithmetic expressions
+
+\`SELECT\` is capable of evaluating multiple expressions and functions. You can
+mix comma-separated lists of expressions with the column names you are
+selecting.
+
+\`\`\`questdb-sql
+SELECT timestamp, symbol,
+ price * 0.25 AS price25pct,
+ amount > 10 AS over10
+FROM trades;
+\`\`\`
+
+The result of \`amount > 10\` is a boolean. The column will be named "over10" and
+take values true or false.
+
+## Boolean expressions
+
+QuestDB supports the \`AND\`, \`OR\`, \`NOT\`, and \`XOR\` boolean operators.
+
+### AND and OR
+
+\`AND\` returns true if both operands are true, and false otherwise.
+
+\`OR\` returns true if at least one of the operands is true.
+
+\`\`\`questdb-sql
+SELECT
+ (true AND false) AS this_will_return_false,
+ (true OR false) AS this_will_return_true;
+\`\`\`
+
+### NOT
+
+\`NOT\` inverts the truth value of the operand.
+
+\`\`\`questdb-sql
+SELECT
+ NOT (true AND false) AS this_will_return_true;
+\`\`\`
+
+### XOR
+
+\`^\` is the bitwise XOR operator and applies to integer types such as \`long\`.
+Depending on what you need, you might prefer to cast the input and
+output to boolean values.
+
+\`\`\`questdb-sql
+SELECT
+ (1 ^ 1) AS will_return_0,
+ (1 ^ 20) AS will_return_21,
+ (true::int ^ false::long)::boolean AS will_return_true,
+ (true::int ^ true::long)::boolean AS will_return_false;
+\`\`\`
+
+## Aggregation
+
+Supported aggregation functions are listed on the
+[aggregation reference](/docs/reference/function/aggregation/).
+
+### Aggregation by group
+
+QuestDB evaluates aggregation functions without the need for a traditional
+\`GROUP BY\` clause whenever a \`SELECT\` clause mixes column names and
+aggregation functions. You can have any number of discrete value columns and
+any number of aggregation functions. The three statements below are equivalent.
+
+\`\`\`questdb-sql title="QuestDB dialect"
+SELECT symbol, avg(price), count()
+FROM trades;
+\`\`\`
+
+\`\`\`questdb-sql title="Traditional SQL equivalent"
+SELECT symbol, avg(price), count()
+FROM trades
+GROUP BY symbol;
+\`\`\`
+
+\`\`\`questdb-sql title="Traditional SQL equivalent with positional argument"
+SELECT symbol, avg(price), count()
+FROM trades
+GROUP BY 1;
+\`\`\`
+
+### Aggregation arithmetic
+
+Aggregation functions can be used in arithmetic expressions. The following
+computes the \`mid\` price for every symbol.
+
+\`\`\`questdb-sql
+SELECT symbol, (min(price) + max(price))/2 mid, count() count
+FROM trades;
+\`\`\`
+
+:::tip
+
+Whenever possible, it is recommended to perform arithmetic outside of
+aggregation functions, as this can have a dramatic impact on performance. For
+example, \`min(price/2)\` executes considerably more slowly than
+\`min(price)/2\`, although both return the same result.
+
+:::
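+
+For example, both of the following return the same values, but the second form
+performs the division once per group rather than once per row:
+
+\`\`\`questdb-sql
+-- slower: divides every row before aggregating
+SELECT symbol, min(price/2) FROM trades;
+
+-- faster: aggregates first, then divides once
+SELECT symbol, min(price)/2 FROM trades;
+\`\`\`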
+
+## Supported clauses
+
+QuestDB supports the following standard SQL clauses within SELECT statements.
+
+### CASE
+
+Conditional results based on expressions.
+
+#### Syntax
+
+
+
+For more information, please refer to the
+[CASE reference](/docs/reference/function/conditional/)
+
+### CAST
+
+Convert values and expressions between types.
+
+#### Syntax
+
+
+
+For more information, please refer to the
+[CAST reference](/docs/reference/sql/cast/)
+
+### DISTINCT
+
+Returns distinct values of the specified column(s).
+
+#### Syntax
+
+
+
+For more information, please refer to the
+[DISTINCT reference](/docs/reference/sql/distinct/).
+
+### FILL
+
+Defines the filling strategy for missing data in aggregation queries. This
+clause complements [SAMPLE BY](/docs/reference/sql/sample-by/) queries.
+
+#### Syntax
+
+
+
+For more information, please refer to the
+[FILL reference](/docs/reference/sql/fill/).
+
+### JOIN
+
+Join tables based on a key or timestamp.
+
+#### Syntax
+
+
+
+For more information, please refer to the
+[JOIN reference](/docs/reference/sql/join/)
+
+### LIMIT
+
+Specify the number and position of records returned by a query.
+
+#### Syntax
+
+
+
+For more information, please refer to the
+[LIMIT reference](/docs/reference/sql/limit/).
+
+### ORDER BY
+
+Orders the results of a query by one or several columns.
+
+#### Syntax
+
+
+
+For more information, please refer to the
+[ORDER BY reference](/docs/reference/sql/order-by)
+
+### UNION, EXCEPT & INTERSECT
+
+Combine the results of two or more select statements. Can include or ignore
+duplicates.
+
+#### Syntax
+
+
+
+For more information, please refer to the
+[UNION, EXCEPT & INTERSECT reference](/docs/reference/sql/union-except-intersect/)
+
+### WHERE
+
+Filters query results.
+
+#### Syntax
+
+
+
+QuestDB supports complex WHERE clauses along with type-specific searches. For
+more information, please refer to the
+[WHERE reference](/docs/reference/sql/where/). There are different syntaxes for
+[text](/docs/reference/sql/where/#symbol-and-string),
+[numeric](/docs/reference/sql/where/#numeric), or
+[timestamp](/docs/reference/sql/where/#timestamp-and-date) filters.
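+
+For example, a timestamp filter using interval syntax (a sketch against the
+\`trades\` table used elsewhere in this reference):
+
+\`\`\`questdb-sql
+-- all trades with a timestamp within the day of 2021-06-01
+SELECT * FROM trades
+WHERE timestamp IN '2021-06-01';
+\`\`\`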
+
+## Additional time-series clauses
+
+QuestDB augments SQL with the following clauses.
+
+### LATEST ON
+
+Retrieves the latest entry by timestamp for a given key or combination of keys.
+This function requires a
+[designated timestamp](/docs/concept/designated-timestamp/).
+
+#### Syntax
+
+
+
+For more information, please refer to the
+[LATEST ON reference](/docs/reference/sql/latest-on/).
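+
+A minimal sketch, assuming a \`trades\` table with a designated \`timestamp\`
+column and a \`symbol\` column:
+
+\`\`\`questdb-sql
+-- latest trade per symbol
+SELECT * FROM trades
+LATEST ON timestamp PARTITION BY symbol;
+\`\`\`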
+
+### SAMPLE BY
+
+Aggregates [time-series data](/blog/what-is-time-series-data/) into homogeneous
+time chunks, for example a daily average or a monthly maximum. This function
+requires a [designated timestamp](/docs/concept/designated-timestamp/).
+
+#### Syntax
+
+
+
+For more information, please refer to the
+[SAMPLE BY reference](/docs/reference/sql/sample-by/).
+
+### TIMESTAMP
+
+Dynamically creates a
+[designated timestamp](/docs/concept/designated-timestamp/) on the output of a
+query. This allows you to perform timestamp operations like
+[SAMPLE BY](#sample-by) or [LATEST ON](#latest-on) on tables which originally
+do not have a designated timestamp.
+
+:::caution
+
+The output query must be ordered by time. \`TIMESTAMP()\` does not check for order
+and using timestamp functions on unordered data may produce unexpected results.
+
+:::
+
+#### Syntax
+
+
+
+For more information, refer to the
+[TIMESTAMP reference](/docs/reference/function/timestamp/)
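+
+A minimal sketch, assuming a hypothetical \`readings\` table without a
+designated timestamp:
+
+\`\`\`questdb-sql
+-- order by ts first, then designate it so SAMPLE BY can be applied
+SELECT ts, avg(value)
+FROM (SELECT * FROM readings ORDER BY ts) TIMESTAMP(ts)
+SAMPLE BY 1h;
+\`\`\`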
+`
+ },
+ {
+ path: "sql/show.md",
+ title: "SHOW keyword",
+ headers: ["Syntax", "Description", "See also"],
+ content: `This keyword provides table, column, and partition information including
+metadata. The \`SHOW\` keyword is useful for checking the
+[designated timestamp setting](/docs/concept/designated-timestamp/) column, the
+[partition attachment settings](/docs/reference/sql/alter-table-attach-partition/),
+and partition storage size on disk.
+
+## Syntax
+
+
+
+## Description
+
+- \`SHOW TABLES\` returns all the tables.
+- \`SHOW COLUMNS\` returns all the columns and their metadata for the selected
+ table.
+- \`SHOW PARTITIONS\` returns the partition information for the selected table.
+- \`SHOW CREATE TABLE\` returns a DDL query that allows you to recreate the table.
+- \`SHOW USER\` shows the user's secret (enterprise-only)
+- \`SHOW GROUPS\` shows all groups the user belongs to, or all groups in the
+  system (enterprise-only)
+- \`SHOW USERS\` shows all users (enterprise-only)
+- \`SHOW SERVICE ACCOUNT\` displays details of a service account (enterprise-only)
+- \`SHOW SERVICE ACCOUNTS\` displays all service accounts or those assigned to the
+  user/group (enterprise-only)
+- \`SHOW PERMISSIONS\` displays the permissions of a user, group, or service
+  account (enterprise-only)
+- \`SHOW SERVER_VERSION\` displays the PostgreSQL compatibility version
+- \`SHOW PARAMETERS\` shows configuration keys and their matching
+  \`env_var_name\`, their values, and the source of each value
+
+## Examples
+
+### SHOW TABLES
+
+\`\`\`questdb-sql title="show tables" demo
+SHOW TABLES;
+\`\`\`
+
+| table_name |
+| --------------- |
+| ethblocks_json |
+| trades |
+| weather |
+| AAPL_orderbook |
+| trips |
+
+### SHOW COLUMNS
+
+\`\`\`questdb-sql
+SHOW COLUMNS FROM my_table;
+\`\`\`
+
+| column | type | indexed | indexBlockCapacity | symbolCached | symbolCapacity | designated |
+| ------ | --------- | ------- | ------------------ | ------------ | -------------- | ---------- |
+| symb | SYMBOL | true | 1048576 | false | 256 | false |
+| price | DOUBLE | false | 0 | false | 0 | false |
+| ts | TIMESTAMP | false | 0 | false | 0 | true |
+| s | STRING | false | 0 | false | 0 | false |
+
+
+### SHOW CREATE TABLE
+
+\`\`\`questdb-sql title="retrieving table ddl" demo
+SHOW CREATE TABLE trades;
+\`\`\`
+
+| ddl |
+|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| CREATE TABLE trades (symbol SYMBOL CAPACITY 256 CACHE, side SYMBOL CAPACITY 256 CACHE, price DOUBLE, amount DOUBLE, timestamp TIMESTAMP) timestamp(timestamp) PARTITION BY DAY WAL WITH maxUncommittedRows=500000, o3MaxLag=600000000us; |
+
+This is printed with formatting, so when pasted into a text editor that
+supports formatting characters, you will see:
+
+\`\`\`questdb-sql
+CREATE TABLE trades (
+ symbol SYMBOL CAPACITY 256 CACHE,
+ side SYMBOL CAPACITY 256 CACHE,
+ price DOUBLE,
+ amount DOUBLE,
+ timestamp TIMESTAMP
+) timestamp(timestamp) PARTITION BY DAY WAL
+WITH maxUncommittedRows=500000, o3MaxLag=600000000us;
+\`\`\`
+
+#### Enterprise variant
+
+[QuestDB Enterprise](/enterprise/) will include an additional \`OWNED BY\` clause populated with the current user.
+
+For example,
+
+\`\`\`questdb-sql
+CREATE TABLE trades (
+ symbol SYMBOL CAPACITY 256 CACHE,
+ side SYMBOL CAPACITY 256 CACHE,
+ price DOUBLE,
+ amount DOUBLE,
+ timestamp TIMESTAMP
+) timestamp(timestamp) PARTITION BY DAY WAL
+WITH maxUncommittedRows=500000, o3MaxLag=600000000us
+OWNED BY 'admin';
+\`\`\`
+
+This clause assigns permissions for the table to that user.
+
+If permissions should be assigned to a different user,
+please modify this clause appropriately.
+
+### SHOW PARTITIONS
+
+\`\`\`questdb-sql
+SHOW PARTITIONS FROM my_table;
+\`\`\`
+
+| index | partitionBy | name | minTimestamp | maxTimestamp | numRows | diskSize | diskSizeHuman | readOnly | active | attached | detached | attachable |
+| ----- | ----------- | -------- | --------------------- | --------------------- | ------- | -------- | ------------- | -------- | ------ | -------- | -------- | ---------- |
+| 0 | WEEK | 2022-W52 | 2023-01-01 00:36:00.0 | 2023-01-01 23:24:00.0 | 39 | 98304 | 96.0 KiB | false | false | true | false | false |
+| 1 | WEEK | 2023-W01 | 2023-01-02 00:00:00.0 | 2023-01-08 23:24:00.0 | 280 | 98304 | 96.0 KiB | false | false | true | false | false |
+| 2 | WEEK | 2023-W02 | 2023-01-09 00:00:00.0 | 2023-01-15 23:24:00.0 | 280 | 98304 | 96.0 KiB | false | false | true | false | false |
+| 3 | WEEK | 2023-W03 | 2023-01-16 00:00:00.0 | 2023-01-18 12:00:00.0 | 101 | 83902464 | 80.0 MiB | false | true | true | false | false |
+
+### SHOW PARAMETERS
+
+\`\`\`questdb-sql
+SHOW PARAMETERS;
+\`\`\`
+
+The output shows:
+
+- \`property_path\`: the configuration key
+- \`env_var_name\`: the matching env var for the key
+- \`value\`: the current value of the key
+- \`value_source\`: how the value is set (default, conf or env)
+
+| property_path | env_var_name | value | value_source |
+| ----------------------------------------- | --------------------------------------------- | ---------- | ------------ |
+| http.min.net.connection.rcvbuf | QDB_HTTP_MIN_NET_CONNECTION_RCVBUF | 1024 | default |
+| http.health.check.authentication.required | QDB_HTTP_HEALTH_CHECK_AUTHENTICATION_REQUIRED | true | default |
+| pg.select.cache.enabled | QDB_PG_SELECT_CACHE_ENABLED | true | conf |
+| cairo.sql.sort.key.max.pages | QDB_CAIRO_SQL_SORT_KEY_MAX_PAGES | 2147483647 | env |
+
+You can optionally chain \`SHOW PARAMETERS\` with other clauses:
+
+\`\`\`questdb-sql
+-- This query will return all parameters where the value contains 'C:'
+SHOW PARAMETERS WHERE value ILIKE '%C:%';
+
+-- This query will return all parameters where the property_path is not 'cairo.root' or 'cairo.sql.backup.root', ordered by the first column
+SHOW PARAMETERS WHERE property_path NOT IN ('cairo.root', 'cairo.sql.backup.root') ORDER BY 1;
+
+-- This query will return all parameters where the value_source is 'env'
+SHOW PARAMETERS WHERE value_source = 'env';
+\`\`\`
+
+### SHOW USER
+
+\`\`\`questdb-sql
+SHOW USER; --as john
+\`\`\`
+
+or
+
+\`\`\`questdb-sql
+SHOW USER john;
+\`\`\`
+
+| auth_type | enabled |
+| ---------- | ------- |
+| Password | false |
+| JWK Token | false |
+| REST Token | false |
+
+### SHOW USERS
+
+\`\`\`questdb-sql
+SHOW USERS;
+\`\`\`
+
+| name |
+| ----- |
+| admin |
+| john |
+
+### SHOW GROUPS
+
+\`\`\`questdb-sql
+SHOW GROUPS;
+\`\`\`
+
+or
+
+\`\`\`questdb-sql
+SHOW GROUPS john;
+\`\`\`
+
+| name |
+| ---------- |
+| management |
+
+### SHOW SERVICE ACCOUNT
+
+\`\`\`questdb-sql
+SHOW SERVICE ACCOUNT;
+\`\`\`
+
+or
+
+\`\`\`questdb-sql
+SHOW SERVICE ACCOUNT ilp_ingestion;
+\`\`\`
+
+| auth_type | enabled |
+| ---------- | ------- |
+| Password | false |
+| JWK Token | false |
+| REST Token | false |
+
+### SHOW SERVICE ACCOUNTS
+
+\`\`\`questdb-sql
+SHOW SERVICE ACCOUNTS;
+\`\`\`
+
+| name |
+| ---------- |
+| management |
+| svc1_admin |
+
+\`\`\`questdb-sql
+SHOW SERVICE ACCOUNTS john;
+\`\`\`
+
+| name |
+| ---------- |
+| svc1_admin |
+
+\`\`\`questdb-sql
+SHOW SERVICE ACCOUNTS admin_group;
+\`\`\`
+
+| name |
+| ---------- |
+| svc1_admin |
+
+### SHOW PERMISSIONS FOR CURRENT USER
+
+\`\`\`questdb-sql
+SHOW PERMISSIONS;
+\`\`\`
+
+| permission | table_name | column_name | grant_option | origin |
+| ---------- | ---------- | ----------- | ------------ | ------ |
+| SELECT | | | t | G |
+
+### SHOW PERMISSIONS user
+
+\`\`\`questdb-sql
+SHOW PERMISSIONS admin;
+\`\`\`
+
+| permission | table_name | column_name | grant_option | origin |
+| ---------- | ---------- | ----------- | ------------ | ------ |
+| SELECT | | | t | G |
+| INSERT | orders | | f | G |
+| UPDATE     | order_item | quantity    | f            | G      |
+
+### SHOW PERMISSIONS
+
+#### For a group
+
+\`\`\`questdb-sql
+SHOW PERMISSIONS admin_group;
+\`\`\`
+
+| permission | table_name | column_name | grant_option | origin |
+| ---------- | ---------- | ----------- | ------------ | ------ |
+| INSERT | orders | | f | G |
+
+#### For a service account
+
+\`\`\`questdb-sql
+SHOW PERMISSIONS ilp_ingestion;
+\`\`\`
+
+| permission | table_name | column_name | grant_option | origin |
+| ---------- | ---------- | ----------- | ------------ | ------ |
+| SELECT | | | t | G |
+| INSERT | | | f | G |
+| UPDATE | | | f | G |
+
+### SHOW SERVER_VERSION
+
+Shows the PostgreSQL compatibility version.
+
+\`\`\`questdb-sql
+SHOW SERVER_VERSION;
+\`\`\`
+
+| server_version |
+| -------------- |
+| 12.3 (questdb) |
+
+## See also
+
+The following functions allow querying tables with filters and using the results
+as part of a function:
+
+- [table_columns()](/docs/reference/function/meta/#table_columns)
+- [tables()](/docs/reference/function/meta/#tables)
+- [table_partitions()](/docs/reference/function/meta/#table_partitions)
+`
+ },
+ {
+ path: "sql/snapshot.md",
+ title: "SNAPSHOT keyword",
+ headers: ["Syntax"],
+ content: `This is a *deprecated* syntax to prepare the database for a full backup or a filesystem (disk) snapshot.
+\`SNAPSHOT\` SQL syntax has been superceded by [\`CHECKPOINT\` SQL syntax](/docs/reference/sql/checkpoint/)
+
+_For a detailed guide backup creation and restoration? Check out our
+[Backup and Restore](/docs/operations/backup/) guide!_
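+
+The deprecated syntax paired a prepare step with a complete step (shown as a
+sketch; prefer \`CHECKPOINT\` in new code):
+
+\`\`\`questdb-sql
+SNAPSHOT PREPARE;
+-- take the filesystem snapshot or copy the files here
+SNAPSHOT COMPLETE;
+\`\`\`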
+
+## Syntax
+
+
+
+`
+ },
+ {
+ path: "sql/truncate.md",
+ title: "TRUNCATE TABLE keyword",
+ headers: ["Syntax", "Notes", "See also"],
+ content: `\`TRUNCATE TABLE\` permanently deletes the contents of a table without deleting
+the table itself.
+
+## Syntax
+
+
+
+## Notes
+
+This command irreversibly deletes the data in the target table. If in doubt,
+make sure you have created [backups](/docs/operations/backup/) of your data.
+
+## Examples
+
+\`\`\`questdb-sql
+TRUNCATE TABLE ratings;
+\`\`\`
+
+## See also
+
+To delete both the data and the table structure, use
+[DROP](/docs/reference/sql/drop/).
+`
+ },
+ {
+ path: "sql/union-except-intersect.md",
+ title: "UNION EXCEPT INTERSECT keywords",
+ headers: ["Syntax", "Keyword execution priority", "Clauses", "Alias"],
+ content: `## Overview
+
+\`UNION\`, \`EXCEPT\`, and \`INTERSECT\` perform set operations.
+
+\`UNION\` is used to combine the results of two or more queries.
+
+\`EXCEPT\` and \`INTERSECT\` return distinct rows by comparing the results of two
+queries.
+
+To work properly, all of the following must be true:
+
+- Each query statement should return the same number of columns.
+- Each pair of columns to be combined should have data types that are either
+  the same or supported by an implicit cast. For example, IPv4 columns can be
+  combined with VARCHAR/STRING columns as they will be automatically cast. See
+  [CAST](/docs/reference/sql/cast/) for more information.
+ - Example:
+ \`\`\`questdb-sql
+ select '1'::varchar as col from long_sequence(1)
+ union all
+ select '127.0.0.1'::ipv4 from long_sequence(1);
+ \`\`\`
+
+- Columns in each query statement should be in the same order.
+
+## Syntax
+
+### UNION
+
+
+
+- \`UNION\` returns distinct results.
+- \`UNION ALL\` returns all \`UNION\` results including duplicates.
+- \`EXCEPT\` returns distinct rows from the left input query that are not returned
+ by the right input query.
+- \`EXCEPT ALL\` returns all \`EXCEPT\` results including duplicates.
+- \`INTERSECT\` returns distinct rows that are returned by both input queries.
+- \`INTERSECT ALL\` returns all \`INTERSECT\` results including duplicates.
+
+## Examples
+
+The examples for the set operations use the following tables:
+
+sensor_1:
+
+| ID | make | city |
+| --- | ----------------- | ------------- |
+| 1 | Honeywell | New York |
+| 2 | United Automation | Miami |
+| 3 | Omron | Miami |
+| 4 | Honeywell | San Francisco |
+| 5 | Omron | Boston |
+| 6 | RS Pro | Boston |
+| 1 | Honeywell | New York |
+
+Notice that the last row in the sensor_1 table is a duplicate.
+
+sensor_2:
+
+| ID | make | city |
+| --- | ----------------- | ------------- |
+| 1 | Honeywell | San Francisco |
+| 2 | United Automation | Boston |
+| 3 | Eberle | New York |
+| 4 | Honeywell | Boston |
+| 5 | Omron | Boston |
+| 6 | RS Pro | Boston |
+
+### UNION
+
+\`\`\`questdb-sql
+sensor_1 UNION sensor_2;
+\`\`\`
+
+returns
+
+| ID | make | city |
+| --- | ----------------- | ------------- |
+| 1 | Honeywell | New York |
+| 2 | United Automation | Miami |
+| 3 | Omron | Miami |
+| 4 | Honeywell | San Francisco |
+| 5 | Omron | Boston |
+| 6 | RS Pro | Boston |
+| 1 | Honeywell | San Francisco |
+| 2 | United Automation | Boston |
+| 3 | Eberle | New York |
+| 4 | Honeywell | Boston |
+
+\`UNION\` eliminates duplicates even when one of the queries returns no rows.
+
+For instance:
+
+\`\`\`questdb-sql
+sensor_1
+UNION
+sensor_2 WHERE ID > 10;
+\`\`\`
+
+returns:
+
+| ID | make | city |
+| --- | ----------------- | ------------- |
+| 1 | Honeywell | New York |
+| 2 | United Automation | Miami |
+| 3 | Omron | Miami |
+| 4 | Honeywell | San Francisco |
+| 5 | Omron | Boston |
+| 6 | RS Pro | Boston |
+
+The duplicate row in \`sensor_1\` is not returned as a result.
+
+\`\`\`questdb-sql
+sensor_1 UNION ALL sensor_2;
+\`\`\`
+
+returns
+
+| ID | make | city |
+| --- | ----------------- | ------------- |
+| 1 | Honeywell | New York |
+| 2 | United Automation | Miami |
+| 3 | Omron | Miami |
+| 4 | Honeywell | San Francisco |
+| 5 | Omron | Boston |
+| 6 | RS Pro | Boston |
+| 1 | Honeywell | San Francisco |
+| 2 | United Automation | Boston |
+| 3 | Eberle | New York |
+| 4 | Honeywell | Boston |
+| 5 | Omron | Boston |
+| 6 | RS Pro | Boston |
+
+### EXCEPT
+
+\`\`\`questdb-sql
+sensor_1 EXCEPT sensor_2;
+\`\`\`
+
+returns
+
+| ID | make | city |
+| --- | ----------------- | ------------- |
+| 1 | Honeywell | New York |
+| 2 | United Automation | Miami |
+| 3 | Omron | Miami |
+| 4 | Honeywell | San Francisco |
+
+Notice that \`EXCEPT\` eliminates duplicates. Let's run \`EXCEPT ALL\` to change
+that.
+
+\`\`\`questdb-sql
+sensor_1 EXCEPT ALL sensor_2;
+\`\`\`
+
+| ID | make | city |
+| --- | ----------------- | ------------- |
+| 1 | Honeywell | New York |
+| 2 | United Automation | Miami |
+| 3 | Omron | Miami |
+| 4 | Honeywell | San Francisco |
+| 1 | Honeywell | New York |
+
+### INTERSECT
+
+\`\`\`questdb-sql
+sensor_1 INTERSECT sensor_2;
+\`\`\`
+
+returns
+
+| ID | make | city |
+| --- | ------ | ------ |
+| 5 | Omron | Boston |
+| 6 | RS Pro | Boston |
+
+In this example there are no duplicates, but if there were any, we could use
+\`INTERSECT ALL\` to retain them.
+
+## Keyword execution priority
+
+QuestDB's engine processes the keywords from left to right, unless precedence
+is defined with parentheses.
+
+For example:
+
+\`\`\`questdb-sql
+query_1 UNION query_2 EXCEPT query_3;
+\`\`\`
+
+is executed as:
+
+\`\`\`questdb-sql
+(query_1 UNION query_2) EXCEPT query_3;
+\`\`\`
+
+Similarly, the following syntax:
+
+\`\`\`questdb-sql
+query_1 UNION query_2 INTERSECT query_3;
+\`\`\`
+
+is executed as:
+
+\`\`\`questdb-sql
+(query_1 UNION query_2) INTERSECT query_3;
+\`\`\`
+
+## Clauses
+
+The set operations can be used with clauses such as \`LIMIT\`, \`ORDER BY\`, and
+\`WHERE\`. However, when these clauses are added after a set operation, the
+execution order varies by clause.
+
+For \`LIMIT\` and \`ORDER BY\`, the clauses are applied after the set operations.
+
+For example:
+
+\`\`\`questdb-sql
+query_1 UNION query_2
+LIMIT 3;
+\`\`\`
+
+is executed as:
+
+\`\`\`questdb-sql
+(query_1 UNION query_2)
+LIMIT 3;
+\`\`\`
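+
+Following the same rule, an \`ORDER BY\` added after a set operation sorts the
+combined result. As a sketch, reusing the placeholder names from above:
+
+\`\`\`questdb-sql title="ORDER BY applied after UNION"
+query_1 UNION query_2
+ORDER BY value;
+\`\`\`
+
+is executed as:
+
+\`\`\`questdb-sql
+(query_1 UNION query_2)
+ORDER BY value;
+\`\`\`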
+
+For \`WHERE\`, the clause is first applied to the query immediately prior to it.
+
+\`\`\`questdb-sql
+query_1 UNION query_2
+WHERE value = 1;
+\`\`\`
+
+is executed as:
+
+\`\`\`questdb-sql
+query_1 UNION (query_2 WHERE value = 1);
+\`\`\`
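+
+To filter the combined result instead, one approach is to wrap the set
+operation in a subquery. As a sketch, again using the placeholder names from
+above:
+
+\`\`\`questdb-sql title="Filtering the combined result"
+SELECT * FROM (query_1 UNION query_2)
+WHERE value = 1;
+\`\`\`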
+
+:::note
+
+- QuestDB applies \`GROUP BY\` implicitly. See
+ [GROUP BY reference](/docs/reference/sql/group-by/) for more information.
+- QuestDB does not support the \`HAVING\` clause yet.
+
+:::
+
+## Alias
+
+When different aliases are used with set operations, execution follows a
+left-to-right order and the output uses the first alias.
+
+For example:
+
+\`\`\`questdb-sql
+SELECT alias_1 FROM table_1
+UNION
+SELECT alias_2 FROM table_2;
+\`\`\`
+
+The output shows \`alias_1\`.
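+
+Since the first alias wins, the output column can be renamed by aliasing the
+first query. A minimal sketch:
+
+\`\`\`questdb-sql
+SELECT alias_1 AS result FROM table_1
+UNION
+SELECT alias_2 FROM table_2;
+\`\`\`
+
+The output shows \`result\`.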
+
+`
+ },
+ {
+ path: "sql/update.md",
+ title: "UPDATE keyword",
+ headers: ["Syntax"],
+ content: `Updates data in a database table.
+
+## Syntax
+
+
+
+:::note
+
+- The same \`columnName\` cannot be specified multiple times after the \`SET\`
+ keyword, as it would be ambiguous.
+- The designated timestamp column cannot be updated, as this would alter the
+ history of the [time-series data](/blog/what-is-time-series-data/).
+- If the target partition is
+ [attached by a symbolic link](/docs/reference/sql/alter-table-attach-partition/#symbolic-links),
+ the partition is read-only. An \`UPDATE\` operation on a read-only partition
+ will fail and generate an error.
+
+:::
+
+## Examples
+
+\`\`\`questdb-sql title="Update with constant"
+UPDATE trades SET price = 125.34 WHERE symbol = 'AAPL';
+\`\`\`
+
+\`\`\`questdb-sql title="Update with function"
+UPDATE book SET mid = (bid + ask)/2 WHERE symbol = 'AAPL';
+\`\`\`
+
+\`\`\`questdb-sql title="Update with subquery"
+UPDATE spreads s SET s.spread = p.ask - p.bid FROM prices p WHERE s.symbol = p.symbol;
+\`\`\`
+
+\`\`\`questdb-sql title="Update with multiple joins"
+WITH up AS (
+ SELECT p.ask - p.bid AS spread, p.timestamp
+ FROM prices p
+ JOIN instruments i ON p.symbol = i.symbol
+ WHERE i.type = 'BOND'
+)
+UPDATE spreads s
+SET spread = up.spread
+FROM up
+WHERE s.timestamp = up.timestamp;
+\`\`\`
+
+\`\`\`questdb-sql title="Update with a sub-query"
+WITH up AS (
+ SELECT symbol, spread, ts
+ FROM temp_spreads
+ WHERE timestamp between '2022-01-02' and '2022-01-03'
+)
+UPDATE spreads s
+SET spread = up.spread
+FROM up
+WHERE up.ts = s.ts AND s.symbol = up.symbol;
+\`\`\`
+`
+ },
+ {
+ path: "sql/vacuum-table.md",
+ title: "VACUUM TABLE",
+ headers: ["Syntax", "Description"],
+ content: `\`VACUUM TABLE\` reclaims storage by scanning file systems and deleting duplicate
+directories and files.
+
+## Syntax
+
+
+
+## Description
+
+This command provides a manual mechanism to reclaim disk space. The
+implementation scans the file system to detect duplicate directories and files.
+Frequent use of the command can be relatively expensive, so \`VACUUM TABLE\`
+should be executed sparingly.
+
+When a table is appended in an out-of-order manner, the table writer writes a
+new partition version to the disk. The old partition version directory is
+deleted once it is no longer read by \`SELECT\` queries. In the event of file
+system errors, physical deletion of old files may be interrupted, and an
+outdated partition version may be left behind, consuming disk space.
+
+When an \`UPDATE\` SQL statement is run, it copies the column files of the
+selected table. The old column files are automatically deleted, but in certain
+circumstances they can be left behind. In this case, \`VACUUM TABLE\` can be
+used to re-trigger the deletion of the old column files.
+
+The \`VACUUM TABLE\` command starts a new scan over the table's partition
+directories and column files. It detects redundant, unused files consuming disk
+space and deletes them. \`VACUUM TABLE\` executes asynchronously, i.e. it may
+keep scanning and deleting files after its response is returned to the SQL
+client.
+
+## Example
+
+\`\`\`questdb-sql
+VACUUM TABLE trades;
+\`\`\`
+`
+ },
+ {
+ path: "sql/where.md",
+ title: "WHERE keyword",
+ headers: ["Syntax", "Symbol and string", "Numeric", "Boolean", "Timestamp and date"],
+ content: \`The \`WHERE\` clause filters data. Filter expressions are required
+to return a boolean result.
+
+QuestDB includes a [JIT compiler](/docs/concept/jit-compiler/) for SQL queries
+which contain \`WHERE\` clauses.
+
+## Syntax
+
+The general syntax is as follows. Specific filters have distinct syntaxes
+detailed thereafter.
+
+
+
+### Logical operators
+
+QuestDB supports \`AND\`, \`OR\`, \`NOT\` as logical operators and can assemble
+conditions using brackets \`()\`.
+
+
+
+\`\`\`questdb-sql title="Example"
+SELECT * FROM table
+WHERE
+a = 1 AND (b = 2 OR c = 3 AND NOT d);
+\`\`\`
+
+## Symbol and string
+
+QuestDB can filter strings and symbols based on equality, inequality, and
+regular expression patterns.
+
+### Exact match
+
+Evaluates match of a string or symbol.
+
+
+
+\`\`\`questdb-sql title="Example"
+SELECT * FROM users
+WHERE name = 'John';
+\`\`\`
+
+| name | age |
+| ---- | --- |
+| John | 31 |
+| John | 45 |
+| ... | ... |
+
+### Does NOT match
+
+Evaluates mismatch of a string or symbol.
+
+
+
+\`\`\`questdb-sql title="Example"
+SELECT * FROM users
+WHERE name != 'John';
+\`\`\`
+
+| name | age |
+| ---- | --- |
+| Tim | 31 |
+| Tom | 45 |
+| ... | ... |
+
+### Regular expression match
+
+Evaluates match against a regular expression defined using
+[java.util.regex](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/regex/Pattern.html)
+patterns.
+
+
+
+\`\`\`questdb-sql title="Regex example"
+SELECT * FROM users WHERE name ~ 'Jo';
+\`\`\`
+
+| name | age |
+| -------- | --- |
+| Joe | 31 |
+| Jonathan | 45 |
+| ... | ... |
+
+### Regular expression does NOT match
+
+Evaluates mismatch against a regular expression defined using
+[java.util.regex](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/regex/Pattern.html)
+patterns.
+
+
+
+\`\`\`questdb-sql title="Example"
+SELECT * FROM users WHERE name !~ 'Jo';
+\`\`\`
+
+| name | age |
+| ---- | --- |
+| Tim | 31 |
+| Tom | 45 |
+| ... | ... |
+
+### List search
+
+Evaluates match or mismatch against a list of elements.
+
+
+
+\`\`\`questdb-sql title="List match"
+SELECT * FROM users WHERE name in('Tim', 'Tom');
+\`\`\`
+
+| name | age |
+| ---- | --- |
+| Tim | 31 |
+| Tom | 45 |
+| ... | ... |
+
+\`\`\`questdb-sql title="List mismatch"
+SELECT * FROM users WHERE NOT name in('Tim', 'Tom');
+\`\`\`
+
+| name | age |
+| ------ | --- |
+| Aaron | 31 |
+| Amelie | 45 |
+| ... | ... |
+
+## Numeric
+
+QuestDB can filter numeric values based on equality, inequality, comparison, and
+proximity.
+
+:::note
+
+For timestamp filters, we recommend the
+[timestamp search notation](#timestamp-and-date) which is faster and less
+verbose.
+
+:::
+
+### Equality, inequality and comparison
+
+
+
+\`\`\`questdb-sql title="Greater than or equal to 23"
+SELECT * FROM users WHERE age >= 23;
+\`\`\`
+
+\`\`\`questdb-sql title="Equal to 23"
+SELECT * FROM users WHERE age = 23;
+\`\`\`
+
+\`\`\`questdb-sql title="NOT Equal to 23"
+SELECT * FROM users WHERE age != 23;
+\`\`\`
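+
+A proximity filter can be expressed with \`abs()\`. As a sketch, assuming the
+\`users\` table from earlier, the following matches ages within 2 of 30:
+
+\`\`\`questdb-sql title="Close to 30"
+SELECT * FROM users WHERE abs(age - 30) < 2;
+\`\`\`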
+
+
+
+## Boolean
+
+
+
+Referencing the column name alone returns rows where the value is \`true\`. To
+return \`false\` values, precede the column name with the \`NOT\` operator.
+
+\`\`\`questdb-sql title="Example - true"
+SELECT * FROM users WHERE isActive;
+\`\`\`
+
+| userId | isActive |
+| ------ | -------- |
+| 12532 | true |
+| 38572 | true |
+| ... | ... |
+
+\`\`\`questdb-sql title="Example - false"
+SELECT * FROM users WHERE NOT isActive;
+\`\`\`
+
+| userId | isActive |
+| ------ | -------- |
+| 876534 | false |
+| 43234 | false |
+| ... | ... |
+
+## Timestamp and date
+
+QuestDB supports both its own timestamp search notation and standard search
+based on inequality. This section describes the use of the **timestamp search
+notation** which is efficient and fast but requires a
+[designated timestamp](/docs/concept/designated-timestamp/).
+
+If a table does not have a designated timestamp applied during table creation,
+one may be applied dynamically
+[during a select operation](/docs/reference/function/timestamp/#during-a-select-operation).
+
+### Native timestamp format
+
+QuestDB automatically recognizes strings formatted as ISO timestamps as the
+\`timestamp\` type. The following are valid examples of strings parsed as
+\`timestamp\` values:
+
+| Valid STRING Format | Resulting Timestamp |
+| -------------------------------- | --------------------------- |
+| 2010-01-12T12:35:26.123456+01:30 | 2010-01-12T11:05:26.123456Z |
+| 2010-01-12T12:35:26.123456+01 | 2010-01-12T11:35:26.123456Z |
+| 2010-01-12T12:35:26.123456Z | 2010-01-12T12:35:26.123456Z |
+| 2010-01-12T12:35:26.12345 | 2010-01-12T12:35:26.123450Z |
+| 2010-01-12T12:35:26.1234 | 2010-01-12T12:35:26.123400Z |
+| 2010-01-12T12:35:26.123 | 2010-01-12T12:35:26.123000Z |
+| 2010-01-12T12:35:26.12 | 2010-01-12T12:35:26.120000Z |
+| 2010-01-12T12:35:26.1 | 2010-01-12T12:35:26.100000Z |
+| 2010-01-12T12:35:26 | 2010-01-12T12:35:26.000000Z |
+| 2010-01-12T12:35 | 2010-01-12T12:35:00.000000Z |
+| 2010-01-12T12 | 2010-01-12T12:00:00.000000Z |
+| 2010-01-12 | 2010-01-12T00:00:00.000000Z |
+| 2010-01 | 2010-01-01T00:00:00.000000Z |
+| 2010 | 2010-01-01T00:00:00.000000Z |
+| 2010-01-12 12:35:26.123456-02:00 | 2010-01-12T14:35:26.123456Z |
+| 2010-01-12 12:35:26.123456Z | 2010-01-12T12:35:26.123456Z |
+| 2010-01-12 12:35:26.123 | 2010-01-12T12:35:26.123000Z |
+| 2010-01-12 12:35:26.12 | 2010-01-12T12:35:26.120000Z |
+| 2010-01-12 12:35:26.1 | 2010-01-12T12:35:26.100000Z |
+| 2010-01-12 12:35:26 | 2010-01-12T12:35:26.000000Z |
+| 2010-01-12 12:35 | 2010-01-12T12:35:00.000000Z |
+
+### Exact timestamp
+
+#### Syntax
+
+
+
+\`\`\`questdb-sql title="Timestamp equals date"
+SELECT scores WHERE ts = '2010-01-12T00:02:26.000Z';
+\`\`\`
+
+| ts | score |
+| ------------------------ | ----- |
+| 2010-01-12T00:02:26.000Z | 2.4 |
+| 2010-01-12T00:02:26.000Z | 3.1 |
+| ... | ... |
+
+\`\`\`questdb-sql title="Timestamp equals timestamp"
+SELECT scores WHERE ts = '2010-01-12T00:02:26.000000Z';
+\`\`\`
+
+| ts | score |
+| --------------------------- | ----- |
+| 2010-01-12T00:02:26.000000Z | 2.4 |
+| 2010-01-12T00:02:26.000000Z | 3.1 |
+| ... | ... |
+
+### Time range (WHERE IN)
+
+Returns results within a defined range.
+
+#### Syntax
+
+
+
+\`\`\`questdb-sql title="Results in a given year"
+SELECT * FROM scores WHERE ts IN '2018';
+\`\`\`
+
+| ts | score |
+| --------------------------- | ----- |
+| 2018-01-01T00:00:00.000000Z | 123.4 |
+| ... | ... |
+| 2018-12-31T23:59:59.999999Z | 115.8 |
+
+\`\`\`questdb-sql title="Results in a given minute"
+SELECT * FROM scores WHERE ts IN '2018-05-23T12:15';
+\`\`\`
+
+| ts | score |
+| --------------------------- | ----- |
+| 2018-05-23T12:15:00.000000Z | 123.4 |
+| ... | ... |
+| 2018-05-23T12:15:59.999999Z | 115.8 |
+
+### Time range with interval modifier
+
+You can apply a modifier to further customize the range. The modifier extends
+or reduces the upper bound of the original timestamp range. An optional
+interval and repetition can be set to apply the search repeatedly over
+successive time ranges.
+
+#### Syntax
+
+
+
+- \`timestamp\` is the original time range for the query.
+- \`modifier\` is a signed interval (a number and a unit, such as \`1M\` or
+ \`-3d\`) modifying the upper bound applied to the \`timestamp\`:
+
+ - A \`positive\` value extends the selected period.
+ - A \`negative\` value reduces the selected period.
+
+- \`interval\` is an unsigned integer indicating the desired interval period for
+ the time range.
+- \`repetition\` is an unsigned integer indicating the number of times the
+ interval should be applied.
+
+#### Examples
+
+Modifying the range:
+
+\`\`\`questdb-sql title="Results in a given year and the first month of the next year"
+SELECT * FROM scores WHERE ts IN '2018;1M';
+\`\`\`
+
+The range is 2018. The modifier extends the upper bound (originally 31 Dec 2018)
+by one month.
+
+| ts | score |
+| --------------------------- | ----- |
+| 2018-01-01T00:00:00.000000Z | 123.4 |
+| ... | ... |
+| 2019-01-31T23:59:59.999999Z | 115.8 |
+
+\`\`\`questdb-sql title="Results in a given month excluding the last 3 days"
+SELECT * FROM scores WHERE ts IN '2018-01;-3d';
+\`\`\`
+
+The range is Jan 2018. The modifier reduces the upper bound (originally 31
+Jan 2018) by 3 days.
+
+| ts | score |
+| --------------------------- | ----- |
+| 2018-01-01T00:00:00.000000Z | 123.4 |
+| ... | ... |
+| 2018-01-28T23:59:59.999999Z | 113.8 |
+
+Modifying the interval:
+
+\`\`\`questdb-sql title="Results on a given date with an interval"
+SELECT * FROM scores WHERE ts IN '2018-01-01;1d;1y;2';
+\`\`\`
+
+The range is extended by one day from Jan 1 2018, with a one-year interval,
+repeated twice. This means that the query searches for results on Jan 1-2 in
+2018 and in 2019:
+
+| ts | score |
+| --------------------------- | ----- |
+| 2018-01-01T00:00:00.000000Z | 123.4 |
+| ... | ... |
+| 2018-01-02T23:59:59.999999Z | 110.3 |
+| 2019-01-01T00:00:00.000000Z | 128.7 |
+| ... | ... |
+| 2019-01-02T23:59:59.999999Z | 103.8 |
+
+A more complete query breakdown would appear as such:
+
+\`\`\`questdb-sql
+-- IN extension for time-intervals
+
+SELECT * FROM trades WHERE timestamp in '2023'; -- whole year
+SELECT * FROM trades WHERE timestamp in '2023-12'; -- whole month
+SELECT * FROM trades WHERE timestamp in '2023-12-20'; -- whole day
+
+-- The whole day, extending 15s into the next day
+SELECT * FROM trades WHERE timestamp in '2023-12-20;15s';
+
+-- For the past 7 days, 2 seconds before and after midnight
+SELECT * from trades WHERE timestamp in '2023-09-20T23:59:58;4s;-1d;7'
+\`\`\`
+
+### IN with multiple arguments
+
+#### Syntax
+
+\`IN\` with more than one argument is treated as standard SQL \`IN\`. It is
+shorthand for multiple \`OR\` conditions, i.e. the following query:
+
+\`\`\`questdb-sql title="IN list"
+SELECT * FROM scores
+WHERE ts IN ('2018-01-01', '2018-01-01T12:00', '2018-01-02');
+\`\`\`
+
+is equivalent to:
+
+\`\`\`questdb-sql title="IN list equivalent OR"
+SELECT * FROM scores
+WHERE ts = '2018-01-01' or ts = '2018-01-01T12:00' or ts = '2018-01-02';
+\`\`\`
+
+| ts | value |
+| --------------------------- | ----- |
+| 2018-01-01T00:00:00.000000Z | 123.4 |
+| 2018-01-01T12:00:00.000000Z | 589.1 |
+| 2018-01-02T00:00:00.000000Z | 131.5 |
+
+### BETWEEN
+
+#### Syntax
+
+For non-standard ranges, users can explicitly specify the target range using the
+\`BETWEEN\` operator. As with standard SQL, both upper and lower bounds of
+\`BETWEEN\` are inclusive, and the order of lower and upper bounds is not
+important so that \`BETWEEN X AND Y\` is equivalent to \`BETWEEN Y AND X\`.
+
+\`\`\`questdb-sql title="Explicit range"
+SELECT * FROM scores
+WHERE ts BETWEEN '2018-01-01T00:00:23.000000Z' AND '2018-01-01T00:00:23.500000Z';
+\`\`\`
+
+| ts | value |
+| --------------------------- | ----- |
+| 2018-01-01T00:00:23.000000Z | 123.4 |
+| ... | ... |
+| 2018-01-01T00:00:23.500000Z | 131.5 |
+
+\`BETWEEN\` can accept non-constant bounds. For example, the following query
+returns all records from the year preceding the current date:
+
+\`\`\`questdb-sql title="One year before current date"
+SELECT * FROM scores
+WHERE ts BETWEEN to_str(now(), 'yyyy-MM-dd')
+AND dateadd('y', -1, to_str(now(), 'yyyy-MM-dd'));
+\`\`\`
+
+##### Inclusivity example
+
+Inclusivity is evaluated at full timestamp precision, which may be more
+granular than the supplied dates suggest.
+
+A bound in the format YYYY-MM-DD is interpreted as YYYY-MM-DDT00:00:00.000000.
+
+To demonstrate, note the behaviour of the following example queries:
+
+\`\`\`questdb-sql title="Demonstrating inclusivity"
+SELECT *
+FROM trades
+WHERE timestamp BETWEEN '2024-04-01' AND '2024-04-03'
+LIMIT -1;
+\`\`\`
+
+| symbol  | side | price     | amount     | timestamp                   |
+| ------- | ---- | --------- | ---------- | --------------------------- |
+| BTC-USD | sell | 65,464.14 | 0.05100764 | 2024-04-02T23:59:59.9947212 |
+
+The query pushes to the boundary as far as possible, all the way to
+\`2024-04-02T23:59:59.9947212\`.
+
+If there were an event at precisely \`2024-04-03T00:00:00.000000\`, it would
+also be included.
+
+Now let us look at:
+
+\`\`\`questdb-sql title="Demonstrating inclusivity"
+SELECT *
+FROM trades
+WHERE timestamp BETWEEN '2024-04-01' AND '2024-04-03T00:00:00.99'
+LIMIT -1;
+\`\`\`
+
+| symbol  | side | price    | amount     | timestamp                   |
+| ------- | ---- | -------- | ---------- | --------------------------- |
+| ETH-USD | sell | 3,279.11 | 0.00881686 | 2024-04-03T00:00:00.988858Z |
+
+Even with fractional seconds, the boundary is inclusive.
+
+A row with the timestamp \`2024-04-03T00:00:00.990000Z\` would also be
+returned, as it falls exactly on the boundary.
+`
+ },
+ {
+ path: "sql/with.md",
+ title: "WITH keyword",
+ headers: ["Syntax"],
+ content: \`Supports Common Table Expressions (CTEs), i.e., naming one or several
+sub-queries to be used with a [\`SELECT\`](/docs/reference/sql/select/),
+[\`INSERT\`](/docs/reference/sql/insert/), or
+[\`UPDATE\`](/docs/reference/sql/update/) query.
+
+Using a CTE makes it easy to simplify large or complex statements which involve
+sub-queries, particularly when such sub-queries are used several times.
+
+## Syntax
+
+
+
+Where:
+
+- \`alias\` is the name given to the sub-query for ease of reuse
+- \`subQuery\` is a SQL query (e.g. \`SELECT * FROM table\`)
+
+## Examples
+
+\`\`\`questdb-sql title="Single alias"
+WITH first_10_users AS (SELECT * FROM users limit 10)
+SELECT user_name FROM first_10_users;
+\`\`\`
+
+\`\`\`questdb-sql title="Using recursively"
+WITH first_10_users AS (SELECT * FROM users limit 10),
+first_5_users AS (SELECT * FROM first_10_users limit 5)
+SELECT user_name FROM first_5_users;
+\`\`\`
+
+\`\`\`questdb-sql title="Flag whether individual trips are longer or shorter than average"
+WITH avg_distance AS (SELECT avg(trip_distance) average FROM trips)
+SELECT pickup_datetime, trips.trip_distance > avg_distance.average longer_than_average
+FROM trips CROSS JOIN avg_distance;
+\`\`\`
+
+\`\`\`questdb-sql title="Update with a sub-query"
+WITH up AS (
+ SELECT symbol, spread, ts
+ FROM temp_spreads
+ WHERE timestamp between '2022-01-02' and '2022-01-03'
+)
+UPDATE spreads s
+SET spread = up.spread
+FROM up
+WHERE up.ts = s.ts AND s.symbol = up.symbol;
+\`\`\`
+
+\`\`\`questdb-sql title="Insert with a sub-query"
+WITH up AS (
+ SELECT symbol, spread, ts
+ FROM temp_spreads
+ WHERE timestamp between '2022-01-02' and '2022-01-03'
+)
+INSERT INTO spreads
+SELECT * FROM up;
+\`\`\`
+`
+ }
+]
diff --git a/packages/web-console/src/utils/questdb-docs-data/toc-list.ts b/packages/web-console/src/utils/questdb-docs-data/toc-list.ts
new file mode 100644
index 000000000..db6990a32
--- /dev/null
+++ b/packages/web-console/src/utils/questdb-docs-data/toc-list.ts
@@ -0,0 +1,587 @@
+// Auto-generated table of contents
+// Generated on 2025-09-18T21:16:04.094Z
+
+export const questdbTocList = {
+ "functions": [
+ "Aggregate functions",
+ "Aggregate functions - approx_count_distinct",
+ "Aggregate functions - approx_median",
+ "Aggregate functions - approx_percentile",
+ "Aggregate functions - avg",
+ "Aggregate functions - corr",
+ "Aggregate functions - count",
+ "Aggregate functions - count_distinct",
+ "Aggregate functions - covar_pop",
+ "Aggregate functions - covar_samp",
+ "Aggregate functions - first/last",
+ "Aggregate functions - first_not_null",
+ "Aggregate functions - haversine_dist_deg",
+ "Aggregate functions - ksum",
+ "Aggregate functions - last_not_null",
+ "Aggregate functions - max",
+ "Aggregate functions - min",
+ "Aggregate functions - nsum",
+ "Aggregate functions - stddev / stddev_samp",
+ "Aggregate functions - stddev_pop",
+ "Aggregate functions - string_agg",
+ "Aggregate functions - string_distinct_agg",
+ "Aggregate functions - sum",
+ "Aggregate functions - var_pop",
+ "Aggregate functions - variance / var_samp",
+ "Array functions",
+ "Array functions - array_avg",
+ "Array functions - array_count",
+ "Array functions - array_cum_sum",
+ "Array functions - array_max",
+ "Array functions - array_min",
+ "Array functions - array_position",
+ "Array functions - array_stddev",
+ "Array functions - array_stddev_pop",
+ "Array functions - array_stddev_samp",
+ "Array functions - array_sum",
+ "Array functions - dim_length",
+ "Array functions - dot_product",
+ "Array functions - flatten",
+ "Array functions - insertion_point",
+ "Array functions - matmul",
+ "Array functions - shift",
+ "Array functions - transpose",
+ "Binary functions",
+ "Binary functions - See also",
+ "Binary functions - base64",
+ "Boolean functions",
+ "Boolean functions - SELECT boolean expressions",
+ "Boolean functions - isOrdered",
+ "Conditional functions",
+ "Conditional functions - case",
+ "Conditional functions - coalesce",
+ "Conditional functions - nullif",
+ "Finance functions",
+ "Finance functions - l2price",
+ "Finance functions - mid",
+ "Finance functions - regr_intercept",
+ "Finance functions - regr_slope",
+ "Finance functions - spread_bps",
+ "Finance functions - vwap",
+ "Finance functions - wmid",
+ "Geospatial functions",
+ "Geospatial functions - make_geohash",
+ "Geospatial functions - rnd_geohash",
+ "Hash Functions",
+ "Hash Functions - Function reference",
+ "Hash Functions - Notes and restrictions",
+ "Hash Functions - Supported functions",
+ "JSON functions",
+ "JSON functions - json_extract",
+ "Meta functions",
+ "Meta functions - build",
+ "Meta functions - current database, schema, or user",
+ "Meta functions - flush_query_cache()",
+ "Meta functions - functions",
+ "Meta functions - hydrate_table_metadata('table1', 'table2' ...)",
+ "Meta functions - materialized_views",
+ "Meta functions - memory_metrics",
+ "Meta functions - query_activity",
+ "Meta functions - reader_pool",
+ "Meta functions - reload_config()",
+ "Meta functions - table_columns",
+ "Meta functions - table_partitions",
+ "Meta functions - table_storage",
+ "Meta functions - tables",
+ "Meta functions - version/pg_catalog.version",
+ "Meta functions - wal_tables",
+ "Meta functions - writer_pool",
+ "Numeric functions",
+ "Numeric functions - abs",
+ "Numeric functions - ceil / ceiling",
+ "Numeric functions - exp",
+ "Numeric functions - floor",
+ "Numeric functions - greatest",
+ "Numeric functions - least",
+ "Numeric functions - ln",
+ "Numeric functions - log",
+ "Numeric functions - power",
+ "Numeric functions - round",
+ "Numeric functions - round_down",
+ "Numeric functions - round_half_even",
+ "Numeric functions - round_up",
+ "Numeric functions - sign",
+ "Numeric functions - size_pretty",
+ "Numeric functions - sqrt",
+ "Parquet functions",
+ "Parquet functions - read_parquet",
+ "Pattern matching operators",
+ "Pattern matching operators - LIKE/ILIKE",
+ "Pattern matching operators - regexp_replace",
+ "Pattern matching operators - ~ (match) and !~ (does not match)",
+ "Random value generator",
+ "Random value generator - Generating sequences",
+ "Random value generator - Usage",
+ "Random value generator - rnd_bin",
+ "Random value generator - rnd_boolean",
+ "Random value generator - rnd_byte",
+ "Random value generator - rnd_char",
+ "Random value generator - rnd_date()",
+ "Random value generator - rnd_double",
+ "Random value generator - rnd_double_array()",
+ "Random value generator - rnd_float",
+ "Random value generator - rnd_int",
+ "Random value generator - rnd_ipv4()",
+ "Random value generator - rnd_ipv4(string, int)",
+ "Random value generator - rnd_long",
+ "Random value generator - rnd_long256",
+ "Random value generator - rnd_short",
+ "Random value generator - rnd_str",
+ "Random value generator - rnd_symbol",
+ "Random value generator - rnd_timestamp()",
+ "Random value generator - rnd_uuid4",
+ "Random value generator - rnd_varchar",
+ "Row generator",
+ "Row generator - generate_series",
+ "Row generator - long_sequence",
+ "Text functions",
+ "Text functions - concat",
+ "Text functions - left",
+ "Text functions - length",
+ "Text functions - lpad",
+ "Text functions - ltrim",
+ "Text functions - quote_ident",
+ "Text functions - replace",
+ "Text functions - right",
+ "Text functions - rtrim",
+ "Text functions - split_part",
+ "Text functions - starts_with",
+ "Text functions - string_agg",
+ "Text functions - strpos / position",
+ "Text functions - substring",
+ "Text functions - to_lowercase / lower",
+ "Text functions - to_uppercase / upper",
+ "Text functions - trim",
+ "Timestamp function",
+ "Timestamp function - Optimization with WHERE clauses",
+ "Timestamp function - Syntax",
+ "Timestamp generator",
+ "Timestamp generator - generate_series",
+ "Timestamp generator - timestamp_sequence",
+ "Timestamp, date and time functions",
+ "Timestamp, date and time functions - Timestamp format",
+ "Timestamp, date and time functions - Timestamp to Date conversion",
+ "Timestamp, date and time functions - date_trunc",
+ "Timestamp, date and time functions - dateadd",
+ "Timestamp, date and time functions - datediff",
+ "Timestamp, date and time functions - day",
+ "Timestamp, date and time functions - day_of_week",
+ "Timestamp, date and time functions - day_of_week_sunday_first",
+ "Timestamp, date and time functions - days_in_month",
+ "Timestamp, date and time functions - extract",
+ "Timestamp, date and time functions - hour",
+ "Timestamp, date and time functions - interval",
+ "Timestamp, date and time functions - interval_end",
+ "Timestamp, date and time functions - interval_start",
+ "Timestamp, date and time functions - is_leap_year",
+ "Timestamp, date and time functions - micros",
+ "Timestamp, date and time functions - millis",
+ "Timestamp, date and time functions - minute",
+ "Timestamp, date and time functions - month",
+ "Timestamp, date and time functions - now",
+ "Timestamp, date and time functions - pg_postmaster_start_time",
+ "Timestamp, date and time functions - second",
+ "Timestamp, date and time functions - sysdate",
+ "Timestamp, date and time functions - systimestamp",
+ "Timestamp, date and time functions - timestamp_ceil",
+ "Timestamp, date and time functions - timestamp_floor",
+ "Timestamp, date and time functions - timestamp_shuffle",
+ "Timestamp, date and time functions - to_date",
+ "Timestamp, date and time functions - to_str",
+ "Timestamp, date and time functions - to_timestamp",
+ "Timestamp, date and time functions - to_timezone",
+ "Timestamp, date and time functions - to_utc",
+ "Timestamp, date and time functions - today, tomorrow, yesterday",
+ "Timestamp, date and time functions - today, tomorrow, yesterday with timezone",
+ "Timestamp, date and time functions - week_of_year",
+ "Timestamp, date and time functions - year",
+ "Touch function",
+ "Trigonometric functions",
+ "Trigonometric functions - acos",
+ "Trigonometric functions - asin",
+ "Trigonometric functions - atan",
+ "Trigonometric functions - atan2",
+ "Trigonometric functions - cos",
+ "Trigonometric functions - cot",
+ "Trigonometric functions - degrees",
+ "Trigonometric functions - pi",
+ "Trigonometric functions - radians",
+ "Trigonometric functions - sin",
+ "Trigonometric functions - tan",
+ "UUID functions",
+ "UUID functions - to_uuid",
+ "Window Functions",
+ "Window Functions - Common window function examples",
+ "Window Functions - avg()",
+ "Window Functions - count()",
+ "Window Functions - dense_rank()",
+ "Window Functions - first_not_null_value()",
+ "Window Functions - first_value()",
+ "Window Functions - lag()",
+ "Window Functions - last_value()",
+ "Window Functions - lead()",
+ "Window Functions - max()",
+ "Window Functions - min()",
+ "Window Functions - rank()",
+ "Window Functions - row_number()",
+ "Window Functions - sum()"
+ ],
+ "operators": [
+ "Bitwise Operators",
+ "Bitwise Operators - `&` AND",
+ "Bitwise Operators - `^` XOR",
+ "Bitwise Operators - `|` OR",
+ "Bitwise Operators - `~` NOT",
+ "Comparison Operators",
+ "Comparison Operators - `<=` Lesser than or equal to",
+ "Comparison Operators - `<>` or `!=` Not equals",
+ "Comparison Operators - `<` Lesser than",
+ "Comparison Operators - `=` Equals",
+ "Comparison Operators - `>=` Greater than or equal to",
+ "Comparison Operators - `>` Greater than",
+ "Comparison Operators - `IN` (list)",
+ "Comparison Operators - `IN` (value1, value2, ...)",
+ "Date and Time Operators",
+ "Date and Time Operators - `BETWEEN` value1 `AND` value2",
+ "Date and Time Operators - `IN` (interval)",
+ "Date and Time Operators - `IN` (timeRange)",
+ "Date and Time Operators - `IN` (timeRangeWithModifier)",
+ "IPv4 Operators",
+ "IPv4 Operators - Return netmask - netmask(string)",
+ "IPv4 Operators - `!=` Does not equal",
+ "IPv4 Operators - `&` Bitwise AND",
+ "IPv4 Operators - `+` Add offset to an IP address",
+ "IPv4 Operators - `-` Difference between two IP addresses",
+ "IPv4 Operators - `-` Subtract offset from IP address",
+ "IPv4 Operators - `<<=` Left IP address contained by or equal",
+ "IPv4 Operators - `<<=` Right IP address contained by or equal",
+ "IPv4 Operators - `<<` Left strict IP address contained by",
+ "IPv4 Operators - `<=` Less than or equal",
+ "IPv4 Operators - `<` Less than",
+ "IPv4 Operators - `=` Equals",
+ "IPv4 Operators - `>=` Greater than or equal",
+ "IPv4 Operators - `>>` Right strict IP address contained by",
+ "IPv4 Operators - `>` Greater than",
+ "IPv4 Operators - `|` Bitwise OR",
+ "IPv4 Operators - `~` Bitwise NOT",
+ "Logical Operators",
+ "Logical Operators - `AND` Logical AND",
+ "Logical Operators - `NOT` Logical NOT",
+ "Logical Operators - `OR` Logical OR",
+ "Misc Operators",
+ "Misc Operators - `.` Prefix",
+ "Misc Operators - `::` Cast",
+ "Numeric Operators",
+ "Numeric Operators - `%` Modulo",
+ "Numeric Operators - `*` Multiply",
+ "Numeric Operators - `+` Add",
+ "Numeric Operators - `-` Negate",
+ "Numeric Operators - `-` Subtract",
+ "Numeric Operators - `/` Divide",
+ "Operator Precedence Table",
+ "Operator Precedence Table - Pre-8.0 notice",
+ "Spatial Operators",
+ "Text Operators",
+ "Text Operators - `!~` Regex doesn't match",
+ "Text Operators - `ILIKE`",
+ "Text Operators - `LIKE`",
+ "Text Operators - `||` Concat",
+ "Text Operators - `~` Regex match"
+ ],
+ "sql": [
+ "ADD USER reference",
+ "ADD USER reference - Description",
+ "ADD USER reference - Syntax",
+ "ALTER MATERIALIZED VIEW ADD INDEX",
+ "ALTER MATERIALIZED VIEW ADD INDEX - Syntax",
+ "ALTER MATERIALIZED VIEW ALTER COLUMN DROP INDEX",
+ "ALTER MATERIALIZED VIEW ALTER COLUMN DROP INDEX - Syntax",
+ "ALTER MATERIALIZED VIEW RESUME WAL",
+ "ALTER MATERIALIZED VIEW RESUME WAL - See also",
+ "ALTER MATERIALIZED VIEW RESUME WAL - Syntax",
+ "ALTER MATERIALIZED VIEW SET REFRESH",
+ "ALTER MATERIALIZED VIEW SET REFRESH - Description",
+ "ALTER MATERIALIZED VIEW SET REFRESH - Syntax",
+ "ALTER MATERIALIZED VIEW SET REFRESH LIMIT",
+ "ALTER MATERIALIZED VIEW SET REFRESH LIMIT - Description",
+ "ALTER MATERIALIZED VIEW SET REFRESH LIMIT - Syntax",
+ "ALTER MATERIALIZED VIEW SET TTL",
+ "ALTER MATERIALIZED VIEW SET TTL - Description",
+ "ALTER MATERIALIZED VIEW SET TTL - Syntax",
+ "ALTER MATERIALIZED VIEW SYMBOL CAPACITY",
+ "ALTER MATERIALIZED VIEW SYMBOL CAPACITY - Notes",
+ "ALTER MATERIALIZED VIEW SYMBOL CAPACITY - Syntax",
+ "ALTER SERVICE ACCOUNT reference",
+ "ALTER SERVICE ACCOUNT reference - Description",
+ "ALTER SERVICE ACCOUNT reference - Syntax",
+ "ALTER TABLE ADD COLUMN",
+ "ALTER TABLE ADD COLUMN - OWNED BY",
+ "ALTER TABLE ADD COLUMN - Syntax",
+ "ALTER TABLE ATTACH PARTITION",
+ "ALTER TABLE ATTACH PARTITION - Description",
+ "ALTER TABLE ATTACH PARTITION - Limitation",
+ "ALTER TABLE ATTACH PARTITION - Syntax",
+ "ALTER TABLE COLUMN ADD INDEX",
+ "ALTER TABLE COLUMN ADD INDEX - Syntax",
+ "ALTER TABLE COLUMN CACHE | NOCACHE",
+ "ALTER TABLE COLUMN CACHE | NOCACHE - Syntax",
+ "ALTER TABLE COLUMN DROP INDEX",
+ "ALTER TABLE COLUMN DROP INDEX - Syntax",
+ "ALTER TABLE COLUMN TYPE",
+ "ALTER TABLE COLUMN TYPE - Available Conversions",
+ "ALTER TABLE COLUMN TYPE - Supported Data Types",
+ "ALTER TABLE COLUMN TYPE - Syntax",
+ "ALTER TABLE COLUMN TYPE - Unsupported Conversions",
+ "ALTER TABLE DEDUP DISABLE",
+ "ALTER TABLE DEDUP DISABLE - Syntax",
+ "ALTER TABLE DEDUP ENABLE",
+ "ALTER TABLE DEDUP ENABLE - See also",
+ "ALTER TABLE DEDUP ENABLE - Syntax",
+ "ALTER TABLE DETACH PARTITION",
+ "ALTER TABLE DETACH PARTITION - Limitation",
+ "ALTER TABLE DETACH PARTITION - Syntax",
+ "ALTER TABLE DROP COLUMN",
+ "ALTER TABLE DROP COLUMN - Syntax",
+ "ALTER TABLE DROP PARTITION",
+ "ALTER TABLE DROP PARTITION - Drop partition by name",
+ "ALTER TABLE DROP PARTITION - Drop partitions using boolean expression",
+ "ALTER TABLE DROP PARTITION - Syntax",
+ "ALTER TABLE RENAME COLUMN",
+ "ALTER TABLE RENAME COLUMN - Syntax",
+ "ALTER TABLE RESUME WAL",
+ "ALTER TABLE RESUME WAL - Description",
+ "ALTER TABLE RESUME WAL - Diagnosing corrupted WAL transactions",
+ "ALTER TABLE RESUME WAL - Syntax",
+ "ALTER TABLE SET PARAM",
+ "ALTER TABLE SET PARAM - Syntax",
+ "ALTER TABLE SET TTL",
+ "ALTER TABLE SET TTL - Description",
+ "ALTER TABLE SET TTL - Syntax",
+ "ALTER TABLE SET TYPE",
+ "ALTER TABLE SET TYPE - Description",
+ "ALTER TABLE SET TYPE - Syntax",
+ "ALTER TABLE SQUASH PARTITIONS",
+ "ALTER TABLE SQUASH PARTITIONS - Syntax",
+ "ALTER TABLE SYMBOL CAPACITY",
+ "ALTER TABLE SYMBOL CAPACITY - Notes",
+ "ALTER TABLE SYMBOL CAPACITY - Syntax",
+ "ALTER USER reference",
+ "ALTER USER reference - Description",
+ "ALTER USER reference - Syntax",
+ "ASOF JOIN keyword",
+ "ASOF JOIN keyword - ASOF JOIN",
+ "ASOF JOIN keyword - JOIN overview",
+ "ASOF JOIN keyword - SPLICE JOIN",
+ "ASSUME SERVICE ACCOUNT reference",
+ "ASSUME SERVICE ACCOUNT reference - Syntax",
+ "CANCEL QUERY",
+ "CANCEL QUERY - Description",
+ "CANCEL QUERY - Syntax",
+ "CASE keyword",
+ "CASE keyword - Description",
+ "CASE keyword - Syntax",
+ "CAST keyword",
+ "CAST keyword - Alternate syntax",
+ "CAST keyword - Explicit conversion",
+ "CAST keyword - Implicit conversion",
+ "CAST keyword - Syntax",
+ "CHECKPOINT keyword",
+ "CHECKPOINT keyword - CHECKPOINT examples",
+ "CHECKPOINT keyword - CHECKPOINT overview",
+ "CHECKPOINT keyword - CHECKPOINT syntax",
+ "COPY keyword",
+ "COPY keyword - Description",
+ "COPY keyword - Options",
+ "COPY keyword - Syntax",
+ "CREATE GROUP reference",
+ "CREATE GROUP reference - Description",
+ "CREATE GROUP reference - Syntax",
+ "CREATE MATERIALIZED VIEW",
+ "CREATE MATERIALIZED VIEW - Alternative refresh strategies",
+ "CREATE MATERIALIZED VIEW - Base table",
+ "CREATE MATERIALIZED VIEW - Creating a view",
+ "CREATE MATERIALIZED VIEW - IF NOT EXISTS",
+ "CREATE MATERIALIZED VIEW - Initial refresh",
+ "CREATE MATERIALIZED VIEW - Materialized view names",
+ "CREATE MATERIALIZED VIEW - Metadata",
+ "CREATE MATERIALIZED VIEW - OWNED BY (Enterprise)",
+ "CREATE MATERIALIZED VIEW - Partitioning",
+ "CREATE MATERIALIZED VIEW - Period materialized views",
+ "CREATE MATERIALIZED VIEW - Query constraints",
+ "CREATE MATERIALIZED VIEW - SYMBOL column capacity",
+ "CREATE MATERIALIZED VIEW - Syntax",
+ "CREATE MATERIALIZED VIEW - Time To Live (TTL)",
+ "CREATE SERVICE ACCOUNT reference",
+ "CREATE SERVICE ACCOUNT reference - Description",
+ "CREATE SERVICE ACCOUNT reference - Syntax",
+ "CREATE TABLE reference",
+ "CREATE TABLE reference - CREATE TABLE AS",
+ "CREATE TABLE reference - CREATE TABLE LIKE",
+ "CREATE TABLE reference - Column indexes",
+ "CREATE TABLE reference - Column name",
+ "CREATE TABLE reference - Deduplication",
+ "CREATE TABLE reference - Designated timestamp",
+ "CREATE TABLE reference - IF NOT EXISTS",
+ "CREATE TABLE reference - OWNED BY",
+ "CREATE TABLE reference - Partitioning",
+ "CREATE TABLE reference - Syntax",
+ "CREATE TABLE reference - Table name",
+ "CREATE TABLE reference - Table target volume",
+ "CREATE TABLE reference - Time To Live (TTL)",
+ "CREATE TABLE reference - Type definition",
+ "CREATE TABLE reference - WITH table parameter",
+ "CREATE TABLE reference - Write-Ahead Log (WAL) Settings",
+ "CREATE USER reference",
+ "CREATE USER reference - Conditional user creation",
+ "CREATE USER reference - Description",
+ "CREATE USER reference - Syntax",
+ "DECLARE keyword",
+ "DECLARE keyword - Limitations",
+ "DECLARE keyword - Mechanics",
+ "DECLARE keyword - Syntax",
+ "DISTINCT keyword",
+ "DISTINCT keyword - Syntax",
+ "DROP GROUP reference",
+ "DROP GROUP reference - Description",
+ "DROP GROUP reference - Syntax",
+ "DROP MATERIALIZED VIEW",
+ "DROP MATERIALIZED VIEW - IF EXISTS",
+ "DROP MATERIALIZED VIEW - See also",
+ "DROP MATERIALIZED VIEW - Syntax",
+ "DROP SERVICE ACCOUNT reference",
+ "DROP SERVICE ACCOUNT reference - Description",
+ "DROP SERVICE ACCOUNT reference - Syntax",
+ "DROP TABLE keyword",
+ "DROP TABLE keyword - Description",
+ "DROP TABLE keyword - See also",
+ "DROP TABLE keyword - Syntax",
+ "DROP USER reference",
+ "DROP USER reference - Description",
+ "DROP USER reference - Syntax",
+ "Data types",
+ "Data types - IPv4",
+ "Data types - Limitations for variable-sized types",
+ "Data types - N-dimensional array",
+ "Data types - TIMESTAMP and DATE considerations",
+ "Data types - The UUID type",
+ "Data types - Type nullability",
+ "Data types - VARCHAR and STRING considerations",
+ "EXIT SERVICE ACCOUNT reference",
+ "EXIT SERVICE ACCOUNT reference - Syntax",
+ "EXPLAIN keyword",
+ "EXPLAIN keyword - Limitations:",
+ "EXPLAIN keyword - See also",
+ "EXPLAIN keyword - Syntax",
+ "FILL keyword",
+ "GRANT ASSUME SERVICE ACCOUNT reference",
+ "GRANT ASSUME SERVICE ACCOUNT reference - Description",
+ "GRANT ASSUME SERVICE ACCOUNT reference - Syntax",
+ "GRANT reference",
+ "GRANT reference - Description",
+ "GRANT reference - Syntax",
+ "GROUP BY keyword",
+ "GROUP BY keyword - Syntax",
+ "INSERT keyword",
+ "INSERT keyword - Syntax",
+ "JOIN keyword",
+ "JOIN keyword - (INNER) JOIN",
+ "JOIN keyword - ASOF JOIN",
+ "JOIN keyword - CROSS JOIN",
+ "JOIN keyword - Execution order",
+ "JOIN keyword - Implicit joins",
+ "JOIN keyword - LEFT (OUTER) JOIN",
+ "JOIN keyword - LT JOIN",
+ "JOIN keyword - SPLICE JOIN",
+ "JOIN keyword - Syntax",
+ "JOIN keyword - Using the `ON` clause for the `JOIN` predicate",
+ "LATEST ON keyword",
+ "LATEST ON keyword - Description",
+ "LATEST ON keyword - Syntax",
+ "LIMIT keyword",
+ "LIMIT keyword - Syntax",
+ "ORDER BY keyword",
+ "ORDER BY keyword - Notes",
+ "ORDER BY keyword - Syntax",
+ "Over Keyword - Window Functions",
+ "Over Keyword - Window Functions - Components of a window function",
+ "Over Keyword - Window Functions - Deep Dive: What is a Window Function?",
+ "Over Keyword - Window Functions - Exclusion options",
+ "Over Keyword - Window Functions - Frame boundaries",
+ "Over Keyword - Window Functions - Frame types and behavior",
+ "Over Keyword - Window Functions - Notes and restrictions",
+ "Over Keyword - Window Functions - Supported functions",
+ "Over Keyword - Window Functions - Syntax",
+ "Query & SQL Overview",
+ "Query & SQL Overview - Apache Parquet",
+ "Query & SQL Overview - PostgreSQL",
+ "Query & SQL Overview - QuestDB Web Console",
+ "Query & SQL Overview - REST HTTP API",
+ "Query & SQL Overview - What's next?",
+ "REFRESH MATERIALIZED VIEW",
+ "REFRESH MATERIALIZED VIEW - See also",
+ "REFRESH MATERIALIZED VIEW - Syntax",
+ "REINDEX",
+ "REINDEX - Options",
+ "REINDEX - Syntax",
+ "REMOVE USER reference",
+ "REMOVE USER reference - Syntax",
+ "RENAME TABLE keyword",
+ "RENAME TABLE keyword - Syntax",
+ "REVOKE ASSUME SERVICE ACCOUNT reference",
+ "REVOKE ASSUME SERVICE ACCOUNT reference - Description",
+ "REVOKE ASSUME SERVICE ACCOUNT reference - Syntax",
+ "REVOKE reference",
+ "REVOKE reference - Description",
+ "REVOKE reference - Syntax",
+ "SAMPLE BY keyword",
+ "SAMPLE BY keyword - ALIGN TO CALENDAR",
+ "SAMPLE BY keyword - ALIGN TO FIRST OBSERVATION",
+ "SAMPLE BY keyword - FROM-TO",
+ "SAMPLE BY keyword - Fill options",
+ "SAMPLE BY keyword - Performance optimization",
+ "SAMPLE BY keyword - Sample calculation",
+ "SAMPLE BY keyword - Sample units",
+ "SAMPLE BY keyword - See also",
+ "SAMPLE BY keyword - Syntax",
+ "SELECT keyword",
+ "SELECT keyword - Additional time-series clauses",
+ "SELECT keyword - Aggregation",
+ "SELECT keyword - Boolean expressions",
+ "SELECT keyword - Simple select",
+ "SELECT keyword - Supported clauses",
+ "SELECT keyword - Syntax",
+ "SHOW keyword",
+ "SHOW keyword - Description",
+ "SHOW keyword - See also",
+ "SHOW keyword - Syntax",
+ "SNAPSHOT keyword",
+ "SNAPSHOT keyword - Syntax",
+ "TRUNCATE TABLE keyword",
+ "TRUNCATE TABLE keyword - Notes",
+ "TRUNCATE TABLE keyword - See also",
+ "TRUNCATE TABLE keyword - Syntax",
+ "UNION EXCEPT INTERSECT keywords",
+ "UNION EXCEPT INTERSECT keywords - Alias",
+ "UNION EXCEPT INTERSECT keywords - Clauses",
+ "UNION EXCEPT INTERSECT keywords - Keyword execution priority",
+ "UNION EXCEPT INTERSECT keywords - Syntax",
+ "UPDATE keyword",
+ "UPDATE keyword - Syntax",
+ "VACUUM TABLE",
+ "VACUUM TABLE - Description",
+ "VACUUM TABLE - Syntax",
+ "WHERE keyword",
+ "WHERE keyword - Boolean",
+ "WHERE keyword - Numeric",
+ "WHERE keyword - Symbol and string",
+ "WHERE keyword - Syntax",
+ "WHERE keyword - Timestamp and date",
+ "WITH keyword",
+ "WITH keyword - Syntax"
+ ]
+}
diff --git a/packages/web-console/src/utils/questdbDocsRetrieval.ts b/packages/web-console/src/utils/questdbDocsRetrieval.ts
new file mode 100644
index 000000000..e960ba9f4
--- /dev/null
+++ b/packages/web-console/src/utils/questdbDocsRetrieval.ts
@@ -0,0 +1,172 @@
+// Import pre-generated documentation data
+import { functionsDocs, operatorsDocs, sqlDocs, questdbTocList } from './questdb-docs-data'
+import type { DocFile } from './questdb-docs-data/functions-docs'
+
+export type DocCategory = 'functions' | 'operators' | 'sql'
+
+// Type the imported data
+const docsData: Record<DocCategory, DocFile[]> = {
+ functions: functionsDocs,
+ operators: operatorsDocs,
+ sql: sqlDocs
+}
+
+/**
+ * Get the table of contents for all QuestDB documentation
+ */
+export function getQuestDBTableOfContents(): string {
+ const toc = questdbTocList as Record<string, string[]>
+
+ let result = '# QuestDB Documentation Table of Contents\n\n'
+
+ // Functions
+ result += '## Functions\n'
+ result += toc.functions.join(', ') + '\n\n'
+
+ // Operators
+ result += '## Operators\n'
+ result += toc.operators.join(', ') + '\n\n'
+
+ // SQL Keywords
+ result += '## SQL Syntax & Keywords\n'
+ result += toc.sql.join(', ') + '\n'
+
+ return result
+}
+
+/**
+ * Get documentation for specific items
+ */
+export function getSpecificDocumentation(category: DocCategory, items: string[]): string {
+ const categoryDocs = docsData[category]
+ if (!categoryDocs) {
+ return `Unknown category: ${category}`
+ }
+
+ const chunks: string[] = []
+ const processedPaths = new Set<string>()
+
+ for (const item of items) {
+ const normalizedItem = item.toLowerCase().replace(/[^a-z0-9_]/g, '_')
+ const parts = item.split(/\s+-\s+/)
+ const hasTitleAndSection = parts.length >= 2
+ const queryTitle = hasTitleAndSection ? parts[0].trim() : null
+ const querySection = hasTitleAndSection ? parts.slice(1).join(' - ').trim() : null
+
+ // Find files containing this item
+ for (const file of categoryDocs) {
+ // Handle explicit "Title - Section" lookups
+ if (hasTitleAndSection && queryTitle && querySection) {
+ if (file.title.toLowerCase() === queryTitle.toLowerCase()) {
+ const matchingHeaderFromTitleSection = file.headers.find(h =>
+ h.toLowerCase() === querySection.toLowerCase() ||
+ h.toLowerCase().replace(/[^a-z0-9_]/g, '_') === querySection.toLowerCase().replace(/[^a-z0-9_]/g, '_')
+ )
+ if (matchingHeaderFromTitleSection && !processedPaths.has(`${file.path}::${matchingHeaderFromTitleSection}`)) {
+ processedPaths.add(`${file.path}::${matchingHeaderFromTitleSection}`)
+ const sectionContent = extractSection(file.content, matchingHeaderFromTitleSection)
+ if (sectionContent) {
+ chunks.push(`### ${file.path} - ${matchingHeaderFromTitleSection}\n\n${sectionContent}`)
+ continue
+ }
+ }
+ }
+ }
+
+ // Check if file name matches
+ const fileKey = file.path.split('/').pop()?.replace('.md', '').replace(/-/g, '_')
+ const hasItemInPath = fileKey === normalizedItem
+
+ // Check if any header matches
+ const hasItemInHeaders = file.headers.some(h =>
+ h.toLowerCase().replace(/[^a-z0-9_]/g, '_') === normalizedItem ||
+ h.toLowerCase() === item.toLowerCase()
+ )
+
+ if ((hasItemInPath || hasItemInHeaders) && !processedPaths.has(file.path)) {
+ processedPaths.add(file.path)
+
+ // If looking for a specific function/operator, try to extract just that section
+ const matchingHeader = file.headers.find(h =>
+ h.toLowerCase() === item.toLowerCase() ||
+ h.toLowerCase().replace(/[^a-z0-9_]/g, '_') === normalizedItem
+ )
+
+ if (matchingHeader) {
+ const sectionContent = extractSection(file.content, matchingHeader)
+ if (sectionContent) {
+ chunks.push(`### ${file.path} - ${matchingHeader}\n\n${sectionContent}`)
+ continue
+ }
+ }
+
+ // Otherwise include the whole file
+ chunks.push(`### ${file.path}\n\n${file.content}`)
+ }
+ }
+ }
+
+ if (chunks.length === 0) {
+ return `No documentation found for: ${items.join(', ')}`
+ }
+
+ return chunks.join('\n\n---\n\n')
+}
+
+/**
+ * Extract a specific section from markdown content
+ */
+function extractSection(content: string, sectionHeader: string): string | null {
+ const lines = content.split('\n')
+ let inSection = false
+ const sectionContent: string[] = []
+
+ for (let i = 0; i < lines.length; i++) {
+ const line = lines[i]
+
+ // Check if we found the section header
+ if (line === `## ${sectionHeader}` || line === `### ${sectionHeader}`) {
+ inSection = true
+ sectionContent.push(line)
+ } else if (inSection) {
+ // Check if we reached the next section
+ if (line.match(/^###?\s/)) {
+ break
+ }
+ sectionContent.push(line)
+ }
+ }
+
+ return sectionContent.length > 0 ? sectionContent.join('\n') : null
+}
+
+/**
+ * Search for documentation by keyword
+ */
+export function searchDocumentation(query: string): string {
+ const lowerQuery = query.toLowerCase()
+ const results: string[] = []
+
+ // Search in all categories
+ for (const [category, docs] of Object.entries(docsData)) {
+ for (const file of docs) {
+ // Check file name
+ if (file.path.toLowerCase().includes(lowerQuery)) {
+ results.push(`${category}/${file.title}`)
+ }
+
+ // Check headers
+ for (const header of file.headers) {
+ if (header.toLowerCase().includes(lowerQuery)) {
+ results.push(`${category}/${header}`)
+ }
+ }
+ }
+ }
+
+ if (results.length === 0) {
+ return `No results found for: ${query}`
+ }
+
+ return `Found ${results.length} results:\n${results.join('\n')}`
+}
\ No newline at end of file
diff --git a/yarn.lock b/yarn.lock
index 7086a5c18..c9e147cf6 100644
--- a/yarn.lock
+++ b/yarn.lock
@@ -206,6 +206,15 @@ __metadata:
languageName: node
linkType: hard
+"@anthropic-ai/sdk@npm:^0.57.0":
+ version: 0.57.0
+ resolution: "@anthropic-ai/sdk@npm:0.57.0"
+ bin:
+ anthropic-ai-sdk: bin/cli
+ checksum: 10/3ff430ded97067467e1731acd906a5ad2e5dfd2f0283ce0ce90f292e7ec57f5ddfdc76094c093f141eac272f6038d9780f7516468bfda0128fb25db6078d041d
+ languageName: node
+ linkType: hard
+
"@babel/cli@npm:^7.17.10":
version: 7.23.0
resolution: "@babel/cli@npm:7.23.0"
@@ -3228,6 +3237,7 @@ __metadata:
version: 0.0.0-use.local
resolution: "@questdb/web-console@workspace:packages/web-console"
dependencies:
+ "@anthropic-ai/sdk": "npm:^0.57.0"
"@babel/cli": "npm:^7.17.10"
"@babel/core": "npm:^7.20.12"
"@babel/preset-env": "npm:^7.20.2"