TLDR Adapter config table (we'll be speaking with Benoit about this PR) #7163
Conversation
May need to add setting row policies to Snowflake config page: https://github.com/dbt-labs/docs.getdbt.com/pull/7162/files
|[Transient tables](/reference/resource-configs/snowflake-configs#transient-tables)|Transient tables allow time travel for 1 day, with no fail-safe period. By default, dbt creates all Snowflake tables as transient.|
|[Query tags](/reference/resource-configs/snowflake-configs#query-tags)|A Snowflake parameter that can be quite useful when searching in the `QUERY_HISTORY` view.|
|[Merge behavior (incremental models)](/reference/resource-configs/snowflake-configs#merge-behavior-incremental-models)|The `incremental_strategy` config determines how dbt builds incremental models. By default, dbt uses a merge statement on Snowflake to refresh these tables. The Snowflake adapter supports the following incremental materialization strategies — `append`, `delete+insert`, `insert_overwrite`, `merge` and [`microbatch`](/docs/build/incremental-microbatch).|
|[`cluster_by`](/reference/resource-configs/snowflake-configs#configuring-table-clustering)|Use the `cluster_by` config to control clustering for a table or incremental model. It orders the table by the specified fields and adds the clustering keys to the target table.|
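As a sketch of how several of these configs combine in practice (the model name, column, and tag value below are hypothetical, not from the PR):

```sql
-- models/orders_snapshot.sql (hypothetical model)
{{
    config(
        materialized='incremental',
        incremental_strategy='merge',   -- Snowflake's default; append, delete+insert, insert_overwrite and microbatch also work
        cluster_by=['ordered_at'],      -- orders the table by this column and adds it as a clustering key
        transient=true,                 -- dbt's default for Snowflake tables
        query_tag='dbt_orders'          -- surfaces in Snowflake's QUERY_HISTORY view
    )
}}

select * from {{ ref('stg_orders') }}
```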
this doesn't include automatic_clustering -- should it? this is tied to my previous feedback about consolidating some of the rows so it matches the headers?
|[Incremental models](/reference/resource-configs/databricks-configs#incremental-models-1)|The `incremental_strategy` config determines how dbt builds incremental models. The dbt-databricks plugin supports the following incremental materialization strategies — `append`, `insert_overwrite`, `merge`, [`microbatch`](/docs/build/incremental-microbatch) and `replace_where`.|
|[Selecting compute per model](/reference/resource-configs/databricks-configs#selecting-compute-per-model)|From v1.7.2, you can assign which compute resource to use on a per-model basis.|
|[`persist_docs`](/reference/resource-configs/databricks-configs#persisting-model-descriptions)|When `persist_docs` is configured correctly, model descriptions appear in the `Comment` field of `describe [table] extended` or `show table extended in [database] like '*'`.|
|[Default file format configurations](/reference/resource-configs/databricks-configs#default-file-format-configurations)|Use the Delta or Hudi file format as the default file format to use advanced incremental strategy features.|
do we want to explicitly share the config?
|[Default file format configurations](/reference/resource-configs/databricks-configs#default-file-format-configurations)|Use the Delta or Hudi file format as the default file format to use advanced incremental strategy features.|
|[Default file format configurations](/reference/resource-configs/databricks-configs#default-file-format-configurations)|Use the Delta or Hudi file format (`file_format`) as the default file format to use advanced incremental strategy features.|
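For context, a sketch of how `file_format` might be set on a Databricks incremental model (model and source names are made up for illustration):

```sql
-- models/events.sql (hypothetical model)
{{
    config(
        materialized='incremental',
        incremental_strategy='merge',  -- merge requires the Delta or Hudi file format
        file_format='delta'            -- the default; enables advanced incremental strategy features
    )
}}

select * from {{ ref('stg_events') }}
```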
| Configuration | Description |
|------------------|----------------|
|[Iceberg table format](/reference/resource-configs/snowflake-configs#iceberg-table-format)|The dbt-snowflake adapter supports the Iceberg table format, which is available for three of the Snowflake materializations: [table](/docs/build/materializations#table), [incremental](/docs/build/materializations#incremental) and [dynamic tables](/reference/resource-configs/snowflake-configs#dynamic-tables).|
|[Dynamic tables](/reference/resource-configs/snowflake-configs#dynamic-tables)|Specific to Snowflake but follows the implementation of [materialized views](/docs/build/materializations#Materialized-View).|
can we give some more info here?
e.g. use the Snowflake-specific dynamic materialization to create dynamic tables. Supports settings like `target_lag`, `refresh_mode`, and `on_configuration_change`.
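A sketch of the settings mentioned above, assuming the `dynamic_table` materialization in dbt-snowflake (the model name and values are illustrative, not from the PR):

```sql
-- models/latest_orders.sql (hypothetical model)
{{
    config(
        materialized='dynamic_table',
        target_lag='5 minutes',               -- how stale the table may be relative to its sources
        snowflake_warehouse='TRANSFORMING',   -- warehouse that runs the refresh
        refresh_mode='AUTO',                  -- AUTO, FULL or INCREMENTAL
        on_configuration_change='apply'       -- apply, continue or fail
    )
}}

select * from {{ ref('stg_orders') }}
```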
|[Merge behavior (incremental models)](/reference/resource-configs/snowflake-configs#merge-behavior-incremental-models)|The `incremental_strategy` config determines how dbt builds incremental models. By default, dbt uses a merge statement on Snowflake to refresh these tables. The Snowflake adapter supports the following incremental materialization strategies — `append`, `delete+insert`, `insert_overwrite`, `merge` and [`microbatch`](/docs/build/incremental-microbatch).|
|[`cluster_by`](/reference/resource-configs/snowflake-configs#configuring-table-clustering)|Use the `cluster_by` config to control clustering for a table or incremental model. It orders the table by the specified fields and adds the clustering keys to the target table.|
|[Configuring virtual warehouses](/reference/resource-configs/snowflake-configs#configuring-virtual-warehouses)|Use the `snowflake_warehouse` model configuration to override the warehouse that is used for specific models.|
|[Copying grants](/reference/resource-configs/snowflake-configs#copying-grants)|When `copy_grants` is set to `true`, dbt adds the copy grants DDL qualifier when rebuilding tables and views. The default is `false`.|
is `copy_grants = true'` the correct config? or is it `copy_grants: true` per the example? also, is the sentence missing a 'When copy_grants is true....' etc.
|[`cluster_by`](/reference/resource-configs/snowflake-configs#configuring-table-clustering)|Use the `cluster_by` config to control clustering for a table or incremental model. It orders the table by the specified fields and adds the clustering keys to the target table.|
|[Configuring virtual warehouses](/reference/resource-configs/snowflake-configs#configuring-virtual-warehouses)|Use the `snowflake_warehouse` model configuration to override the warehouse that is used for specific models.|
|[Copying grants](/reference/resource-configs/snowflake-configs#copying-grants)|When `copy_grants` is set to `true`, dbt adds the copy grants DDL qualifier when rebuilding tables and views. The default is `false`.|
|[Secure views](/reference/resource-configs/snowflake-configs#secure-views)|Use the `secure` config for view models which can be used to limit access to sensitive data.|
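To illustrate how the grants and secure-view configs read together (the model and source names are hypothetical):

```sql
-- models/sensitive_view.sql (hypothetical model)
{{
    config(
        materialized='view',
        secure=true,        -- creates a Snowflake secure view, limiting access to sensitive data
        copy_grants=true    -- adds the COPY GRANTS qualifier when the view is rebuilt; defaults to false
    )
}}

select * from {{ ref('stg_customers') }}
```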
🔥
hey @nataliefiann, thanks for this PR and the quick turnaround! a few comments that apply to most of the adapters:
Co-authored-by: Mirna Wong <[email protected]>
Iceboxing this conversation for the time being. Will re-open when appropriate.
What are you changing in this pull request and why?
I created this PR following this thread (https://app.slack.com/client/T0Z0T0223/C02NCQ9483C), raised by Benoit, who recommended adding a table at the top of the Snowflake configs page. I used this idea to update the config pages for popular adapters and add a table with short descriptions.
I've added a TLDR table to the top of the BigQuery, Databricks, Postgres, Redshift and Snowflake config pages
Checklist