| <aname="input_iam_workspace_groups"></a> [iam\_workspace\_groups](#input\_iam\_workspace\_groups)| Used to create workspace group. Map of group name and its parameters, such as users and service principals added to the group. Also possible to configure group entitlements. | <pre>map(object({<br/> user = optional(list(string))<br/> service_principal = optional(list(string))<br/> entitlements = optional(list(string))<br/> }))</pre> |`{}`| no |
| <aname="input_ip_addresses"></a> [ip\_addresses](#input\_ip\_addresses)| A map of IP address ranges |`map(string)`| <pre>{<br/> "all": "0.0.0.0/0"<br/>}</pre> | no |
| <aname="input_key_vault_secret_scope"></a> [key\_vault\_secret\_scope](#input\_key\_vault\_secret\_scope)| Object with Azure Key Vault parameters required for creation of Azure-backed Databricks Secret scope | <pre>list(object({<br/> name = string<br/> key_vault_id = string<br/> dns_name = string<br/> tenant_id = string<br/> }))</pre> |`[]`| no |
| <a name="input_lakebase_instance"></a> [lakebase\_instance](#input\_lakebase\_instance) | Map of objects with parameters to configure and deploy OLTP database instances in Databricks.<br/>To deploy and use an OLTP database instance in Databricks:<br/>- You must be a Databricks workspace owner.<br/>- A Databricks workspace must already be deployed in your cloud environment (e.g., AWS or Azure).<br/>- The workspace must be on the Premium plan or above.<br/>- You must enable the "Lakebase: Managed Postgres OLTP Database" feature in the Preview features section.<br/>- Database instances can only be deleted manually through the Databricks UI or using the Databricks CLI with the --purge option. | <pre>map(object({<br/> name = string<br/> capacity = optional(string, "CU_1")<br/> node_count = optional(number, 1)<br/> enable_readable_secondaries = optional(bool, false)<br/> retention_window_in_days = optional(number, 7)<br/> }))</pre> | `{}` | no |
| <aname="input_mount_configuration"></a> [mount\_configuration](#input\_mount\_configuration)| Configuration for mounting storage, including only service principal details | <pre>object({<br/> service_principal = object({<br/> client_id = string<br/> client_secret = string<br/> tenant_id = string<br/> })<br/> })</pre> | <pre>{<br/> "service_principal": {<br/> "client_id": null,<br/> "client_secret": null,<br/> "tenant_id": null<br/> }<br/>}</pre> | no |
| <aname="input_mount_enabled"></a> [mount\_enabled](#input\_mount\_enabled)| Boolean flag that determines whether mount point for storage account filesystem is created |`bool`|`false`| no |
| <aname="input_mountpoints"></a> [mountpoints](#input\_mountpoints)| Mountpoints for databricks | <pre>map(object({<br/> storage_account_name = string<br/> container_name = string<br/> }))</pre> |`{}`| no |
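The table above lists only types and defaults. Below is a minimal sketch of how several of these inputs could be wired together in a module call. The module label, the `source` path, and the referenced resources and variables (`azurerm_key_vault.example`, `data.azurerm_client_config.current`, the `var.mount_sp_*` variables) are placeholders for illustration and are not part of this module.

```hcl
module "databricks_runtime" {
  source = "../.." # hypothetical path to this module; replace with your actual source

  # Workspace group with users and a cluster-creation entitlement
  iam_workspace_groups = {
    "data-engineers" = {
      user         = ["user1@example.com", "user2@example.com"]
      entitlements = ["allow_cluster_create"]
    }
  }

  # Restrict access to a corporate CIDR range instead of the default 0.0.0.0/0
  ip_addresses = {
    "corp-vpn" = "10.0.0.0/16"
  }

  # Azure-backed secret scope referencing an existing Key Vault (placeholder resource)
  key_vault_secret_scope = [{
    name         = "kv-scope"
    key_vault_id = azurerm_key_vault.example.id
    dns_name     = azurerm_key_vault.example.vault_uri
    tenant_id    = data.azurerm_client_config.current.tenant_id
  }]

  # Small Lakebase (OLTP) database instance; requires the preview feature to be enabled
  lakebase_instance = {
    "analytics-db" = {
      name                     = "analytics-db"
      capacity                 = "CU_1"
      retention_window_in_days = 7
    }
  }

  # Mount an ADLS container using a dedicated service principal (placeholder variables)
  mount_enabled = true
  mount_configuration = {
    service_principal = {
      client_id     = var.mount_sp_client_id
      client_secret = var.mount_sp_client_secret
      tenant_id     = var.mount_sp_tenant_id
    }
  }
  mountpoints = {
    "raw" = {
      storage_account_name = "examplestorageaccount"
      container_name       = "raw"
    }
  }
}
```

The values shown are examples only; any input omitted from the call falls back to the default listed in the table.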