
Commit e937fd7

Merge branch 'main' into more_visibility_best_practices
2 parents eeebee0 + 37ad092 commit e937fd7

File tree

27 files changed: +936, -115 lines

docs/cloud/bestpractices/index.md

Lines changed: 3 additions & 2 deletions
```diff
@@ -1,6 +1,6 @@
 ---
 slug: /cloud/bestpractices
-keywords: ['Cloud', 'Best Practices', 'Bulk Inserts', 'Asynchronous Inserts', 'Avoid Mutations', 'Avoid Nullable Columns', 'Avoid Optimize Final', 'Low Cardinality Partitioning Key']
+keywords: ['Cloud', 'Best Practices', 'Bulk Inserts', 'Asynchronous Inserts', 'Avoid Mutations', 'Avoid Nullable Columns', 'Avoid Optimize Final', 'Low Cardinality Partitioning Key', 'Multi Tenancy']
 title: 'Overview'
 hide_title: true
 description: 'Landing page for Best Practices section in ClickHouse'
@@ -18,4 +18,5 @@ This section provides six best practices you will want to follow to get the most
 | [Avoid Nullable Columns](/cloud/bestpractices/avoid-nullable-columns) | Learn why you should ideally avoid Nullable columns |
 | [Avoid Optimize Final](/cloud/bestpractices/avoid-optimize-final) | Learn why you should avoid `OPTIMIZE TABLE ... FINAL` |
 | [Choose a Low Cardinality Partitioning Key](/cloud/bestpractices/low-cardinality-partitioning-key) | Learn how to choose a low cardinality partitioning key. |
-| [Usage Limits](/cloud/bestpractices/usage-limits)| Explore the limits of ClickHouse. |
+| [Usage Limits](/cloud/bestpractices/usage-limits)| Explore the limits of ClickHouse. |
+| [Multi tenancy](/cloud/bestpractices/multi-tenancy)| Learn about different strategies to implement multi-tenancy. |
```

docs/cloud/bestpractices/multitenancy.md

Lines changed: 378 additions & 0 deletions
Large diffs are not rendered by default.

docs/cloud/manage/api/api-reference-index.md

Lines changed: 4 additions & 0 deletions
```diff
@@ -8,3 +8,7 @@ description: 'Landing page for Cloud API'
 if you've spotted an error or want to change something, please edit the YAML
 frontmatter of the files themselves.
 -->
+| Page | Description |
+|-----|-----|
+| [ClickHouse Cloud API](/cloud/manage/api/api-overview) | Learn about ClickHouse Cloud API |
+| [Cloud API](/cloud/manage/api/) | Landing page for Cloud API |
```

docs/integrations/data-ingestion/clickpipes/postgres/index.md

Lines changed: 8 additions & 6 deletions
```diff
@@ -34,17 +34,19 @@ To get started, you first need to make sure that your Postgres database is set u
 1. [Amazon RDS Postgres](./postgres/source/rds)
 
-2. [Supabase Postgres](./postgres/source/supabase)
+2. [Amazon Aurora Postgres](./postgres/source/aurora)
 
-3. [Google Cloud SQL Postgres](./postgres/source/google-cloudsql)
+3. [Supabase Postgres](./postgres/source/supabase)
 
-4. [Azure Flexible Server for Postgres](./postgres/source/azure-flexible-server-postgres)
+4. [Google Cloud SQL Postgres](./postgres/source/google-cloudsql)
 
-5. [Neon Postgres](./postgres/source/neon-postgres)
+5. [Azure Flexible Server for Postgres](./postgres/source/azure-flexible-server-postgres)
 
-6. [Crunchy Bridge Postgres](./postgres/source/crunchy-postgres)
+6. [Neon Postgres](./postgres/source/neon-postgres)
 
-7. [Generic Postgres Source](./postgres/source/generic), if you are using any other Postgres provider or using a self-hosted instance
+7. [Crunchy Bridge Postgres](./postgres/source/crunchy-postgres)
+
+8. [Generic Postgres Source](./postgres/source/generic), if you are using any other Postgres provider or using a self-hosted instance
 
 
 :::warning
```
docs/integrations/data-ingestion/clickpipes/postgres/source/aurora.md (new file; path inferred from the file's slug and import paths)

Lines changed: 133 additions & 0 deletions
---
sidebar_label: 'Amazon Aurora Postgres'
description: 'Set up Amazon Aurora Postgres as a source for ClickPipes'
slug: /integrations/clickpipes/postgres/source/aurora
title: 'Aurora Postgres Source Setup Guide'
---

import parameter_group_in_blade from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/rds/parameter_group_in_blade.png';
import change_rds_logical_replication from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/rds/change_rds_logical_replication.png';
import change_wal_sender_timeout from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/rds/change_wal_sender_timeout.png';
import modify_parameter_group from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/rds/modify_parameter_group.png';
import reboot_rds from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/rds/reboot_rds.png';
import security_group_in_rds_postgres from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/rds/security_group_in_rds_postgres.png';
import edit_inbound_rules from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/rds/edit_inbound_rules.png';
import Image from '@theme/IdealImage';

# Aurora Postgres Source Setup Guide

## Supported Postgres versions {#supported-postgres-versions}

ClickPipes supports Aurora PostgreSQL-Compatible Edition version 12 and later.

## Enable Logical Replication {#enable-logical-replication}

You can skip this section if your Aurora instance already has the following settings configured:
- `rds.logical_replication = 1`
- `wal_sender_timeout = 0`

These settings are typically pre-configured if you previously used another data replication tool.

```text
postgres=> SHOW rds.logical_replication ;
 rds.logical_replication
-------------------------
 on
(1 row)

postgres=> SHOW wal_sender_timeout ;
 wal_sender_timeout
--------------------
 0
(1 row)
```
If not already configured, follow these steps:

1. Create a new parameter group for your Aurora PostgreSQL version with the required settings:
   - Set `rds.logical_replication` to 1
   - Set `wal_sender_timeout` to 0

<Image img={parameter_group_in_blade} alt="Where to find Parameter groups in Aurora" size="lg" border/>

<Image img={change_rds_logical_replication} alt="Changing rds.logical_replication" size="lg" border/>

<Image img={change_wal_sender_timeout} alt="Changing wal_sender_timeout" size="lg" border/>

2. Apply the new parameter group to your Aurora PostgreSQL cluster

<Image img={modify_parameter_group} alt="Modifying Aurora PostgreSQL with new parameter group" size="lg" border/>

3. Reboot your Aurora cluster to apply the changes

<Image img={reboot_rds} alt="Reboot Aurora PostgreSQL" size="lg" border/>

## Configure Database User {#configure-database-user}

Connect to your Aurora PostgreSQL writer instance as an admin user and execute the following commands:

1. Create a dedicated user for ClickPipes:

```sql
CREATE USER clickpipes_user PASSWORD 'some-password';
```

2. Grant schema permissions. The following example shows permissions for the `public` schema. Repeat these commands for each schema you want to replicate:

```sql
GRANT USAGE ON SCHEMA "public" TO clickpipes_user;
GRANT SELECT ON ALL TABLES IN SCHEMA "public" TO clickpipes_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA "public" GRANT SELECT ON TABLES TO clickpipes_user;
```

3. Grant replication privileges:

```sql
GRANT rds_replication TO clickpipes_user;
```

4. Create a publication for replication:

```sql
CREATE PUBLICATION clickpipes_publication FOR ALL TABLES;
```
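As a quick sanity check (a suggested addition, not part of the original steps), you can confirm the publication exists and covers all tables by querying the `pg_publication` catalog:

```sql
-- puballtables should be 't', since the publication above
-- was created FOR ALL TABLES.
SELECT pubname, puballtables
FROM pg_publication
WHERE pubname = 'clickpipes_publication';
```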
## Configure Network Access {#configure-network-access}

### IP-based Access Control {#ip-based-access-control}

If you want to restrict traffic to your Aurora cluster, please add the [documented static NAT IPs](../../index.md#list-of-static-ips) to the `Inbound rules` of your Aurora security group.

<Image img={security_group_in_rds_postgres} alt="Where to find security group in Aurora PostgreSQL?" size="lg" border/>

<Image img={edit_inbound_rules} alt="Edit inbound rules for the above security group" size="lg" border/>

### Private Access via AWS PrivateLink {#private-access-via-aws-privatelink}

To connect to your Aurora cluster through a private network, you can use AWS PrivateLink. Follow our [AWS PrivateLink setup guide for ClickPipes](/knowledgebase/aws-privatelink-setup-for-clickpipes) to set up the connection.

### Aurora-Specific Considerations {#aurora-specific-considerations}

When setting up ClickPipes with Aurora PostgreSQL, keep these considerations in mind:

1. **Connection Endpoint**: Always connect to the writer endpoint of your Aurora cluster, as logical replication requires write access to create replication slots and must connect to the primary instance.

2. **Failover Handling**: In the event of a failover, Aurora will automatically promote a reader to be the new writer. ClickPipes will detect the disconnection and attempt to reconnect to the writer endpoint, which will now point to the new primary instance.

3. **Global Database**: If you're using Aurora Global Database, you should connect to the primary region's writer endpoint, as cross-region replication already handles data movement between regions.

4. **Storage Considerations**: Aurora's storage layer is shared across all instances in a cluster, which can provide better performance for logical replication compared to standard RDS.
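To confirm that the endpoint you connected to is in fact the writer, as the first consideration above requires (a hedged check assuming psql access; not part of the original guide):

```sql
-- Returns 'f' on the writer (primary) and 't' on read replicas;
-- replication slots can only be created on the writer.
SELECT pg_is_in_recovery();
```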
### Dealing with Dynamic Cluster Endpoints {#dealing-with-dynamic-cluster-endpoints}

While Aurora provides stable endpoints that automatically route to the appropriate instance, here are some additional approaches for ensuring consistent connectivity:

1. For high-availability setups, configure your application to use the Aurora writer endpoint, which automatically points to the current primary instance.

2. If using cross-region replication, consider setting up separate ClickPipes for each region to reduce latency and improve fault tolerance.
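After a failover or reconnection, you can verify that the logical replication slot backing the pipe is present and has an active consumer (a hedged check; the actual slot name is assigned when the ClickPipe is created):

```sql
-- 'active' shows whether a consumer such as ClickPipes
-- is currently connected to the slot.
SELECT slot_name, plugin, active
FROM pg_replication_slots
WHERE slot_type = 'logical';
```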
## What's next? {#whats-next}

You can now [create your ClickPipe](../index.md) and start ingesting data from your Aurora PostgreSQL cluster into ClickHouse Cloud.
Make sure to note down the connection details you used while setting up your Aurora PostgreSQL cluster as you will need them during the ClickPipe creation process.
docs/integrations/data-visualization/chartbrew-and-clickhouse.md (new file; path inferred from the file's slug and the data-visualization index entry)

Lines changed: 118 additions & 0 deletions
---
title: 'Connecting Chartbrew to ClickHouse'
sidebar_label: 'Chartbrew'
sidebar_position: 131
slug: /integrations/chartbrew-and-clickhouse
keywords: ['ClickHouse', 'Chartbrew', 'connect', 'integrate', 'visualization']
description: 'Connect Chartbrew to ClickHouse to create real-time dashboards and client reports.'
---

import chartbrew_01 from '@site/static/images/integrations/data-visualization/chartbrew_01.png';
import chartbrew_02 from '@site/static/images/integrations/data-visualization/chartbrew_02.png';
import chartbrew_03 from '@site/static/images/integrations/data-visualization/chartbrew_03.png';
import chartbrew_04 from '@site/static/images/integrations/data-visualization/chartbrew_04.png';
import chartbrew_05 from '@site/static/images/integrations/data-visualization/chartbrew_05.png';
import chartbrew_06 from '@site/static/images/integrations/data-visualization/chartbrew_06.png';
import chartbrew_07 from '@site/static/images/integrations/data-visualization/chartbrew_07.png';
import chartbrew_08 from '@site/static/images/integrations/data-visualization/chartbrew_08.png';
import chartbrew_09 from '@site/static/images/integrations/data-visualization/chartbrew_09.png';

import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx';
import CommunityMaintainedBadge from '@theme/badges/CommunityMaintained';
import Image from '@theme/IdealImage';

# Connecting Chartbrew to ClickHouse

<CommunityMaintainedBadge/>

[Chartbrew](https://chartbrew.com) is a data visualization platform that allows users to create dashboards and monitor data in real time. It supports multiple data sources, including ClickHouse, and provides a no-code interface for building charts and reports.

## Goal {#goal}

In this guide, you will connect Chartbrew to ClickHouse, run a SQL query, and create a visualization. By the end, your dashboard may look something like this:

<Image img={chartbrew_01} size="lg" alt="Chartbrew dashboard" />

:::tip Add some data
If you do not have a dataset to work with, you can add one of the examples. This guide uses the [UK Price Paid](/getting-started/example-datasets/uk-price-paid.md) dataset.
:::

## 1. Gather your connection details {#1-gather-your-connection-details}

<ConnectionDetails />
## 2. Connect Chartbrew to ClickHouse {#2-connect-chartbrew-to-clickhouse}

1. Log in to [Chartbrew](https://chartbrew.com/login) and go to the **Connections** tab.
2. Click **Create connection** and select **ClickHouse** from the available database options.

<Image img={chartbrew_02} size="lg" alt="Select ClickHouse connection in Chartbrew" />

3. Enter the connection details for your ClickHouse database:

   - **Display Name**: A name to identify the connection in Chartbrew.
   - **Host**: The hostname or IP address of your ClickHouse server.
   - **Port**: Typically `8443` for HTTPS connections.
   - **Database Name**: The database you want to connect to.
   - **Username**: Your ClickHouse username.
   - **Password**: Your ClickHouse password.

<Image img={chartbrew_03} size="lg" alt="ClickHouse connection settings in Chartbrew" />

4. Click **Test connection** to verify that Chartbrew can connect to ClickHouse.
5. If the test is successful, click **Save connection**. Chartbrew will automatically retrieve the schema from ClickHouse.

<Image img={chartbrew_04} size="lg" alt="ClickHouse JSON schema in Chartbrew" />

## 3. Create a dataset and run a SQL query {#3-create-a-dataset-and-run-a-sql-query}

1. Click on the **Create dataset** button or navigate to the **Datasets** tab to create one.
2. Select the ClickHouse connection you created earlier.

<Image img={chartbrew_05} size="lg" alt="Select ClickHouse connection for dataset" />

Write a SQL query to retrieve the data you want to visualize. For example, this query calculates the average price paid per year from the `uk_price_paid` dataset:

```sql
SELECT toYear(date) AS year, avg(price) AS avg_price
FROM uk_price_paid
GROUP BY year
ORDER BY year;
```
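If you prefer rounder numbers on the chart axis (an optional tweak, not in the original guide), you can round the average, since `avg(price)` returns a long floating-point value:

```sql
-- Same aggregation with the average rounded to whole pounds,
-- which keeps axis labels and tooltips tidy in the chart.
SELECT toYear(date) AS year, round(avg(price)) AS avg_price
FROM uk_price_paid
GROUP BY year
ORDER BY year;
```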
<Image img={chartbrew_07} size="lg" alt="ClickHouse SQL query in Chartbrew" />

Click **Run query** to fetch the data.

If you're unsure how to write the query, you can use **Chartbrew's AI assistant** to generate SQL queries based on your database schema.

<Image img={chartbrew_06} size="lg" alt="ClickHouse AI SQL assistant in Chartbrew" />

Once the data is retrieved, click **Configure dataset** to set up the visualization parameters.

## 4. Create a visualization {#4-create-a-visualization}

1. Define a metric (numerical value) and dimension (categorical value) for your visualization.
2. Preview the dataset to ensure the query results are structured correctly.
3. Choose a chart type (e.g., line chart, bar chart, pie chart) and add it to your dashboard.
4. Click **Complete dataset** to finalize the setup.

<Image img={chartbrew_08} size="lg" alt="Chartbrew dashboard with ClickHouse data" />

You can create as many datasets as you want to visualize different aspects of your data. Using these datasets, you can create multiple dashboards to keep track of different metrics.
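For example (a hypothetical second dataset, assuming the same `uk_price_paid` table), a query counting sales per year could back a bar chart next to the average-price chart:

```sql
-- Number of recorded property sales per year; pairs well
-- with the average-price dataset on the same dashboard.
SELECT toYear(date) AS year, count() AS transactions
FROM uk_price_paid
GROUP BY year
ORDER BY year;
```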
103+
104+
<Image img={chartbrew_01} size="lg" alt="Chartbrew dashboard with ClickHouse data" />
105+
106+
## 5. Automate data updates {#5-automate-data-updates}
107+
108+
To keep your dashboard up-to-date, you can schedule automatic data updates:
109+
110+
1. Click the Calendar icon next to the dataset refresh button.
111+
2. Configure the update interval (e.g., every hour, every day).
112+
3. Save the settings to enable automatic refresh.
113+
114+
<Image img={chartbrew_09} size="lg" alt="Chartbrew dataset refresh settings" />
115+
116+
## Learn more {#learn-more}
117+
118+
For more details, check out the blog post about [Chartbrew and ClickHouse](https://chartbrew.com/blog/visualizing-clickhouse-data-with-chartbrew-a-step-by-step-guide/).

docs/integrations/data-visualization/index.md

Lines changed: 2 additions & 0 deletions
```diff
@@ -26,6 +26,7 @@ Now that your data is in ClickHouse, it's time to analyze it, which often involv
 
 - [Apache Superset](./superset-and-clickhouse.md)
 - [Astrato](./astrato-and-clickhouse.md)
+- [Chartbrew](./chartbrew-and-clickhouse.md)
 - [Deepnote](./deepnote.md)
 - [Draxlr](./draxlr-and-clickhouse.md)
 - [Explo](./explo-and-clickhouse.md)
@@ -47,6 +48,7 @@ Now that your data is in ClickHouse, it's time to analyze it, which often involv
 | [Apache Superset](./superset-and-clickhouse.md) | ClickHouse official connector ||| |
 | [Astrato](./astrato-and-clickhouse.md) | Native connector ||| Works natively using pushdown SQL (direct query only). |
 | [AWS QuickSight](./quicksight-and-clickhouse.md) | MySQL interface ||| Works with some limitations, see [the documentation](./quicksight-and-clickhouse.md) for more details |
+| [Chartbrew](./chartbrew-and-clickhouse.md) | ClickHouse official connector ||| |
 | [Deepnote](./deepnote.md) | Native connector ||| |
 | [Explo](./explo-and-clickhouse.md) | Native connector ||| |
 | [Grafana](./grafana/index.md) | ClickHouse official connector ||| |
```

docs/integrations/index.mdx

Lines changed: 2 additions & 0 deletions
```diff
@@ -174,6 +174,7 @@ import great_expectations_logo from '@site/static/images/integrations/logos/grea
 import Hashboardsvg from '@site/static/images/integrations/logos/hashboard.svg';
 import luzmo_logo from '@site/static/images/integrations/logos/luzmo.png';
 import vs_logo from '@site/static/images/integrations/logos/logo_vs.png';
+import chartbrew_logo from '@site/static/images/integrations/logos/logo_chartbrew.png';
 import Image from '@theme/IdealImage';
 
 ClickHouse integrations are organized by their support level:
@@ -258,6 +259,7 @@ We are actively compiling this list of ClickHouse integrations below, so it's no
 |BlinkOps|<Image img={blinkops_logo} size="logo" alt="BlinkOps Logo"/>|Security automation|Create automations to manage data and user permissions.|[Documentation](https://docs.blinkops.com/docs/integrations/clickhouse)|
 |Bytewax|<Bytewaxsvg alt="ByteWax Logo" style={{width: '3rem'}}/>|Data ingestion|Open source Python stream processor for transforming and ingesting data to ClickHouse|[Documentation](https://bytewax.io/blog/building-a-click-house-sink-for-bytewax)|
 |Calyptia (Fluent Bit)|<Image img={calyptia_logo} size="logo" alt="Calyptia logo"/>|Data ingestion|CNCF graduated open-source project for the collection, processing, and delivery of logs, metrics, and traces|[Blog](https://clickhouse.com/blog/kubernetes-logs-to-clickhouse-fluent-bit)|
+|Chartbrew|<Image img={chartbrew_logo} size="logo" alt="Chartbrew logo" style={{width: '3rem', 'backgroundColor': 'transparent', 'boxShadow': 'none'}}/>|Data visualization|Chartbrew is a data visualization platform that allows users to create dashboards and monitor data in real time.|[Documentation](/integrations/chartbrew-and-clickhouse),<br />[Website](https://chartbrew.com/integrations/clickhouse),<br />[Blog](https://chartbrew.com/blog/visualizing-clickhouse-data-with-chartbrew-a-step-by-step-guide/)|
 |CloudCanal|<Cloudcanalsvg className="image" alt="CloudCanal logo" style={{width: '3rem'}}/>|Data integration|A data synchronization and migration tool.|[Website](https://www.cloudcanalx.com/us/)|
 |CloudQuery|<Cloudquerysvg className="image" alt="CloudQuery logo" style={{width: '3rem'}}/>|Data ingestion|Open source high-performance ELT framework.|[Documentation](https://www.cloudquery.io/docs/plugins/destinations/clickhouse/overview)|
 |Cube.js|<Cubejssvg alt="Cubejs logo" style={{width: '3rem'}}/>|Data visualization|Cube is the Semantic Layer for building data apps.|[Website](https://cube.dev/for/clickhouse-dashboard)|
```
