- Capture processing logic as sql in [Silver transformation file](https://github.com/databrickslabs/dlt-meta/blob/main/examples/silver_transformations.json)
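For orientation, a silver transformation file is a JSON array that maps each target table to the SQL select expressions applied to it. The fragment below is an illustrative sketch modeled on the linked example; field names such as `target_table`, `select_exp`, and `where_clause` and the sample values are assumptions to be checked against the actual file:

```json
[
  {
    "target_table": "customers",
    "select_exp": [
      "address",
      "email",
      "firstname",
      "id AS customer_id"
    ],
    "where_clause": ["id IS NOT NULL"]
  }
]
```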
#### Generic Lakeflow Declarative Pipeline
- Apply appropriate readers based on input metadata
- Apply data quality rules with DLT expectations
- Apply CDC apply changes if specified in metadata
- Builds Lakeflow Declarative Pipeline graph based on input/output metadata
- Launch Lakeflow Declarative Pipeline
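The metadata-driven dispatch described above can be pictured with a small, self-contained sketch. This is an assumption for illustration only, not dlt-meta's actual code: the field names (`sourceFormat`, `dataQualityExpectations`, `cdcApplyChanges`) and the `plan_dataflow` helper are hypothetical, loosely modeled on the DataflowSpec idea.

```python
# Illustrative sketch (not dlt-meta's implementation): a generic pipeline
# inspects input metadata and decides which reader, data-quality rules,
# and CDC handling to apply. All field names here are assumptions.

def plan_dataflow(spec: dict) -> dict:
    """Build a simple execution plan from an input-metadata dict."""
    readers = {
        "cloudFiles": "autoloader_stream",
        "kafka": "kafka_stream",
        "delta": "delta_read",
    }
    plan = {"reader": readers.get(spec.get("sourceFormat"), "unsupported")}
    # Attach data-quality rules only when the spec declares them.
    if spec.get("dataQualityExpectations"):
        plan["expectations"] = spec["dataQualityExpectations"]
    # Enable CDC handling only when requested in the metadata.
    if spec.get("cdcApplyChanges"):
        plan["cdc"] = spec["cdcApplyChanges"]
    return plan

spec = {
    "sourceFormat": "cloudFiles",
    "dataQualityExpectations": {"valid_id": "id IS NOT NULL"},
    "cdcApplyChanges": {"keys": ["id"], "sequence_by": "ts"},
}
plan = plan_dataflow(spec)
print(plan["reader"])  # autoloader_stream
```

The point is only the shape of the pattern: one generic pipeline, driven entirely by per-dataflow metadata rather than per-pipeline code.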
## High-Level Process Flow:
| Data Quality Expectations Support | Bronze, Silver layer |
| Quarantine table support | Bronze layer |
|[create_auto_cdc_flow](https://docs.databricks.com/aws/en/dlt-ref/dlt-python-ref-apply-changes) API support | Bronze, Silver layer |
|[create_auto_cdc_from_snapshot_flow](https://docs.databricks.com/aws/en/dlt-ref/dlt-python-ref-apply-changes-from-snapshot) API support | Bronze layer|
|[append_flow](https://docs.databricks.com/en/delta-live-tables/flows.html#use-append-flow-to-write-to-a-streaming-table-from-multiple-source-streams) API support | Bronze layer|
| Liquid cluster support | Bronze, Bronze Quarantine, Silver tables|
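The quarantine-table feature in the table above can be sketched with a small, self-contained example. This is an assumption for illustration, not dlt-meta's code: the rule names and the `split_quarantine` helper are hypothetical. Records that fail any data-quality rule are routed to a quarantine set instead of the target table.

```python
# Illustrative sketch (not dlt-meta's implementation): route records that
# fail any data-quality rule into a quarantine set, mirroring the
# Bronze-layer quarantine table feature. Rule names are made up.
RULES = {
    "valid_id": lambda r: r.get("id") is not None,
    "non_negative_amount": lambda r: r.get("amount", 0) >= 0,
}

def split_quarantine(rows):
    good, quarantined = [], []
    for row in rows:
        failed = [name for name, check in RULES.items() if not check(row)]
        target = quarantined if failed else good
        # Record which rules failed so the quarantine table is debuggable.
        target.append({**row, "failed_rules": failed})
    return good, quarantined

good, bad = split_quarantine([
    {"id": 1, "amount": 10},
    {"id": None, "amount": 5},
])
print(len(good), len(bad))  # 1 1
```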
The command will prompt you to provide onboarding details. If you have cloned the dlt-meta repository, you can accept the default values, which will use the configuration from the demo folder.
1. Deploy a Lakeflow Declarative Pipeline with dlt-meta configuration such as `layer`, `group`, and `dataflowSpec table details` to your Databricks workspace
2. Display message: ```dlt-meta pipeline={pipeline_id} created and launched with update_id={pipeline_update_id}, url=https://{databricks workspace url}/#joblist/pipelines/{pipeline_id}```
3. The pipeline URL will automatically open in your default browser.