Commit 3165cfb (parent 02ec652)

Add Iceberg v2 consumer for consuming flattened data; the schema is inferred by Arrow.

1 file changed: README.md (5 additions, 4 deletions)
```diff
@@ -29,16 +29,17 @@ The Iceberg handlers are designed to stream CDC events directly into Apache Iceb
 
 * `IcebergChangeHandler`: A straightforward handler that appends change data to a source-equivalent Iceberg table using a predefined schema.
     * **Use Case**: Best for creating a "bronze" layer where you want to capture the raw Debezium event. The `before` and `after` payloads are stored as complete JSON strings.
-    * **Schema**: Uses a fixed schema where complex nested fields (`source`, `before`, `after`) are stored as `StringType`. It also includes helpful metadata columns (`_consumed_at`, `_dbz_event_key`, `_dbz_event_key_hash`) for traceability.
-    * With consuming data as json, all source syste schema changes will be absorbed automatically.
-    * **Automatic Table Creation & Partitioning**: **It automatically creates a new Iceberg table for each source table** and partitions it by day on the `_consumed_at` timestamp for efficient time-series queries.
+    * **Schema**: Uses a fixed schema where complex nested fields (`source`, `before`, `after`) are stored as `StringType`.
+    * With consuming data as json, all source system schema changes will be absorbed automatically.
+    * **Automatic Table Creation & Partitioning**: It automatically creates a new Iceberg table for each source table and partitions it by day on the `_consumed_at` timestamp for efficient time-series queries.
+    * **Enriched Metadata**: It also adds `_consumed_at`, `_dbz_event_key`, and `_dbz_event_key_hash` columns for enhanced traceability.
 
 * `IcebergChangeHandlerV2`: A more advanced handler that automatically infers the schema from the Debezium events and creates a well-structured Iceberg table accordingly.
     * **Use Case**: Ideal for scenarios where you want the pipeline to automatically create tables with native data types that mirror the source. This allows for direct querying of the data without needing to parse JSON.
     * **Schema and Features**:
         * **Automatic Schema Inference**: It inspects the first batch of records for a given table and infers the schema using PyArrow, preserving native data types (e.g., `LongType`, `TimestampType`).
         * **Robust Type Handling**: If a field's type cannot be inferred from the initial batch (e.g., it is always `null`), it safely falls back to `StringType` to prevent errors.
-        * **Automatic Table Creation & Partitioning**: **It automatically creates a new Iceberg table for each source table** and partitions it by day on the `_consumed_at` timestamp for efficient time-series queries.
+        * **Automatic Table Creation & Partitioning**: It automatically creates a new Iceberg table for each source table and partitions it by day on the `_consumed_at` timestamp for efficient time-series queries.
         * **Enriched Metadata**: It also adds `_consumed_at`, `_dbz_event_key`, and `_dbz_event_key_hash` columns for enhanced traceability.
 
 ### dlt (data load tool) Handler (`pydbzengine[dlt]`)
```
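
The schema-inference behavior described for `IcebergChangeHandlerV2` can be illustrated with plain PyArrow. The sketch below is not the handler's actual code; it only demonstrates the technique the README describes: infer native types from the first batch of flattened records, and fall back to a string type for any column whose type cannot be determined (e.g., a column that is always `null`). The function name and sample records are hypothetical.

```python
# Illustrative sketch of the inference strategy implied by the V2 handler's
# description; not the handler's actual implementation.
import pyarrow as pa


def infer_batch_schema(records: list[dict]) -> pa.Schema:
    """Infer an Arrow schema from the first batch of flattened records,
    falling back to string for fields whose type cannot be determined."""
    # Let Arrow infer native types (int64, timestamp, ...) from the dicts.
    table = pa.Table.from_pylist(records)
    fields = []
    for field in table.schema:
        # An all-null column is inferred as pa.null(); storing it as a
        # string keeps the resulting Iceberg schema usable and prevents
        # write errors when real values arrive later.
        if pa.types.is_null(field.type):
            fields.append(pa.field(field.name, pa.string()))
        else:
            fields.append(field)
    return pa.schema(fields)


# Hypothetical first batch: "deleted_at" is always null in this batch.
batch = [
    {"id": 1, "name": "alice", "deleted_at": None},
    {"id": 2, "name": "bob", "deleted_at": None},
]
print(infer_batch_schema(batch))
# id: int64, name: string, deleted_at: string (fell back from null)
```

When such an Arrow schema is handed to Iceberg, `int64` maps to `LongType` and `string` to `StringType`, which matches the native-type behavior the diff describes.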
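For context, a minimal end-to-end sketch of how these handlers might be plugged into the Debezium engine. The import paths, handler constructor arguments, and namespace name below are assumptions for illustration, not verbatim from this repo; only the Debezium property names and the `pyiceberg` `load_catalog` call are standard.

```python
# A minimal sketch, assuming pydbzengine's engine/handler API; import paths
# and constructor arguments are illustrative assumptions.
from pyiceberg.catalog import load_catalog
from pydbzengine import Properties, DebeziumJsonEngine  # assumed API
from pydbzengine.handlers.iceberg import IcebergChangeHandlerV2  # assumed path

# Standard Debezium engine configuration (property names are Debezium's).
props = Properties()
props.setProperty("name", "engine")
props.setProperty("connector.class",
                  "io.debezium.connector.postgresql.PostgresConnector")
props.setProperty("database.hostname", "localhost")
props.setProperty("database.port", "5432")
props.setProperty("topic.prefix", "cdc")

# Any pyiceberg-supported catalog should work; a REST catalog is shown here.
catalog = load_catalog("default", **{"type": "rest",
                                     "uri": "http://localhost:8181"})

# The V2 handler infers each table's schema from its first batch via PyArrow,
# creates the day-partitioned Iceberg table, and appends subsequent batches.
handler = IcebergChangeHandlerV2(catalog=catalog,
                                 destination_namespace="cdc_bronze")

engine = DebeziumJsonEngine(properties=props, handler=handler)
engine.run()  # blocks, streaming CDC batches into Iceberg
```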
