description: 'Overview page for ClickHouse Cloud pricing'
---
import ClickPipesFAQ from './jan2025_faq/_snippets/_clickpipes_faq.md'
For pricing information, see the [ClickHouse Cloud Pricing](https://clickhouse.com/pricing#pricing-calculator) page.
ClickHouse Cloud bills based on the usage of compute, storage, [data transfer](/cloud/manage/network-data-transfer) (egress over the internet and cross-region), and [ClickPipes](/integrations/clickpipes).
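As a rough sketch of how these four dimensions combine, the snippet below sums them into a single monthly estimate. The rates here are illustrative placeholders only, not published ClickHouse Cloud prices; use the pricing calculator for real figures.

```python
# Rough sketch of how a ClickHouse Cloud bill is composed from the
# four billed dimensions. All rates below are illustrative
# placeholders -- actual prices vary by tier, cloud provider, and
# region (see the pricing calculator).

RATES = {
    "compute_per_unit_hour": 0.2194,  # placeholder $/compute-unit/hour
    "storage_per_tb_month": 25.30,    # placeholder $/TB/month
    "egress_per_gb": 0.09,            # placeholder $/GB internet egress
    "clickpipes_per_gb": 0.04,        # placeholder $/GB ingested
}

def estimate_monthly_bill(compute_unit_hours, storage_tb, egress_gb, clickpipes_gb):
    """Sum the four billed dimensions into one monthly estimate."""
    return (
        compute_unit_hours * RATES["compute_per_unit_hour"]
        + storage_tb * RATES["storage_per_tb_month"]
        + egress_gb * RATES["egress_per_gb"]
        + clickpipes_gb * RATES["clickpipes_per_gb"]
    )

# Example: one always-on compute unit (730 h), 1 TB stored,
# 100 GB egress, 500 GB ingested via ClickPipes.
print(round(estimate_monthly_bill(730, 1, 100, 500), 2))
```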
To understand what affects your bill and how you can manage your spend, read on.
## Amazon Web Services (AWS) example {#amazon-web-services-aws-example}
:::note
- Prices reflect AWS us-east-1 pricing.
</tbody>
</table>
### Scale (always-on, auto-scaling): from $499.38 per month {#scale-always-on-auto-scaling-from-49938-per-month}
Best for: workloads requiring enhanced SLAs (2+ replica services), scalability, and advanced security.
</tr>
<tr>
<td>Storage</td>
<td>1 TB of data + 1 backup<br></br>\$50.60</td>
<td>2 TB of data + 1 backup<br></br>\$101.20</td>
<td>3 TB of data + 1 backup<br></br>\$151.80</td>
</tr>
<tr>
<td>Public internet egress data transfer</td>
</tr>
<tr>
<td>Storage</td>
<td>5 TB + 1 backup<br></br>\$253.00</td>
<td>10 TB + 1 backup<br></br>\$506.00</td>
<td>20 TB + 1 backup<br></br>\$1,012.00</td>
</tr>
<tr>
<td>Public internet egress data transfer</td>
</tbody>
</table>
## Frequently Asked Questions {#faqs}
### How is compute metered? {#how-is-compute-metered}
### How do I estimate compression? {#how-do-i-estimate-compression}
Compression can vary from dataset to dataset.
How much it varies depends on how compressible the data is in the first place (number of high vs. low cardinality fields),
and how the user sets up the schema (using optional codecs or not, for instance).
It can be on the order of 10x for common types of analytical data, but it can be significantly lower or higher as well.
See the [optimizing documentation](/optimize/asynchronous-inserts) for guidance and this [Uber blog](https://www.uber.com/blog/logging/) for a detailed logging use case example.
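As a back-of-the-envelope sketch, you can project billed storage from raw data size under an assumed compression ratio. The ~10x default below is just the rough figure cited above for common analytical data, not a guarantee; measure your own ratio on a sample, for example by comparing `data_uncompressed_bytes` and `data_compressed_bytes` in ClickHouse's `system.parts` table.

```python
# Sketch: project on-disk (billed) storage from raw data size under
# an assumed compression ratio. The 10x default is only the rough
# figure for common analytical data -- verify against your own
# dataset (e.g. via ClickHouse's system.parts table).

def projected_storage_tb(raw_tb: float, compression_ratio: float = 10.0) -> float:
    """Estimated on-disk size after compression."""
    return raw_tb / compression_ratio

# 30 TB of raw analytical data at ~10x compression -> ~3 TB stored
print(projected_storage_tb(30.0))
```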
### What controls does ClickHouse Cloud offer to manage costs for Scale and Enterprise services? {#what-controls-does-clickhouse-cloud-offer-to-manage-costs-for-scale-and-enterprise-services}
- Trial and Annual Commit customers are notified automatically by email when their consumption hits certain thresholds: `50%`, `75%`, and `90%`. This allows users to proactively manage their usage.
- ClickHouse Cloud allows users to set a maximum auto-scaling limit on their compute, a significant cost factor for analytical workloads, via [Advanced scaling control](/manage/scaling).
- The [Advanced scaling control](/manage/scaling) lets you set memory limits with an option to control the behavior of pausing/idling during inactivity.
### How do I access my invoice for my marketplace subscription to the ClickHouse Cloud service? {#how-do-i-access-my-invoice-for-my-marketplace-subscription-to-the-clickhouse-cloud-service}
All marketplace subscriptions are billed and invoiced by the marketplace. You can view your invoice through the respective cloud provider marketplace directly.
### Why do the dates on the Usage statements not match my Marketplace Invoice? {#why-do-the-dates-on-the-usage-statements-not-match-my-marketplace-invoice}
AWS Marketplace billing follows the calendar month cycle.
For example, for usage between 01-Dec-2024 and 01-Jan-2025,
an invoice is generated between 3-Jan and 5-Jan-2025.
ClickHouse Cloud usage statements follow a different billing cycle where usage is metered
and reported over 30 days starting from the day of sign-up.
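A minimal sketch of why the two cycles diverge, using Python's standard `datetime` (the sign-up date below is an arbitrary example):

```python
# Sketch of the two cycles described above: ClickHouse Cloud usage
# statements run in 30-day windows from the sign-up date, while AWS
# Marketplace invoices follow calendar months. Dates are examples.

from datetime import date, timedelta

def usage_statement_windows(signup: date, n: int):
    """First n 30-day usage-statement windows, starting at sign-up."""
    return [
        (signup + timedelta(days=30 * i), signup + timedelta(days=30 * (i + 1)))
        for i in range(n)
    ]

# Signing up mid-month means statement windows straddle marketplace
# invoice months, which is why the dates do not line up.
for start, end in usage_statement_windows(date(2024, 12, 15), 2):
    print(start, "->", end)
```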
Without warehouses, you would have to pay for the amount of memory that the data engineer needs for their queries.
However, combining two services in a warehouse and idling one of them helps you save money.
## ClickPipes pricing {#clickpipes-pricing}
### ClickPipes for Postgres CDC {#clickpipes-for-postgres-cdc}
This section outlines the pricing model for our Postgres Change Data Capture (CDC)
connector in ClickPipes. In designing this model, our goal was to keep pricing
highly competitive while staying true to our core vision:
> Making it seamless and
> affordable for customers to move data from Postgres to ClickHouse for
> real-time analytics.
The connector is over **5x more cost-effective** than external
ETL tools and similar features in other database platforms. $^*$
:::note
Pricing will start being metered in monthly bills beginning **September 1st, 2025**,
for all customers (both existing and new) using Postgres CDC ClickPipes. Until
then, usage is free. Customers have a 3-month window starting May 29 (GA announcement)
to review and optimize their costs if needed, although we expect most will not need
to make any changes.
:::
$^*$ _For example, the external ETL tool Airbyte, which offers similar CDC capabilities,
charges $10/GB (excluding credits)—more than 20 times the cost of Postgres CDC in
ClickPipes for moving 1 TB of data._
#### Pricing dimensions {#pricing-dimensions}
There are two main dimensions to pricing:
1. **Ingested data**: The raw, uncompressed bytes coming from Postgres and
ingested into ClickHouse.
2. **Compute**: The compute units provisioned per service manage multiple
Postgres CDC ClickPipes and are separate from the compute units used by the
ClickHouse Cloud service. This additional compute is dedicated specifically
to Postgres CDC ClickPipes. Compute is billed at the service level, not per
individual pipe. Each compute unit includes 2 vCPUs and 8 GB of RAM.
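As an illustrative sketch only, the two dimensions combine as below. The dollar rates here are invented placeholders, not published Postgres CDC prices; the point is that the number of pipes does not enter the formula, since compute is billed per service.

```python
# Sketch of the two Postgres CDC pricing dimensions described above.
# The dollar rates are placeholders, NOT published CDC prices.
# Compute is billed per service (per compute unit), not per pipe,
# so adding pipes to a service does not by itself add compute cost.

CDC_COMPUTE_PER_UNIT_HOUR = 0.10  # placeholder rate
CDC_INGEST_PER_GB = 0.01          # placeholder rate

def cdc_monthly_cost(compute_units, ingested_gb, hours=730):
    # compute_units: units provisioned for the service
    # (1 unit = 2 vCPUs, 8 GB of RAM)
    return (
        compute_units * CDC_COMPUTE_PER_UNIT_HOUR * hours
        + ingested_gb * CDC_INGEST_PER_GB
    )

# The pipe count never appears: two pipes on the same service share
# the same provisioned compute units.
print(round(cdc_monthly_cost(1, 2000), 2))
```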
#### Ingested data {#ingested-data}
The Postgres CDC connector operates in two main phases:
- **Initial load / resync**: This captures a full snapshot of Postgres tables
and occurs when a pipe is first created or re-synced.
- **Continuous replication (CDC)**: Ongoing replication of changes—such as inserts,
updates, deletes, and schema changes—from Postgres to ClickHouse.
In most use cases, continuous replication accounts for over 90% of a ClickPipe
life cycle. Initial loads involve transferring a large volume of data all at once.
The ingested data rate applies to all streaming ClickPipes
for the data transferred via the replica pods. The ingested data size (GB) is charged based on bytes received from the source (uncompressed or compressed).
#### What are ClickPipes replicas? {#what-are-clickpipes-replicas}
ClickPipes ingests data from remote data sources via a dedicated infrastructure
that runs and scales independently of the ClickHouse Cloud service.
For this reason, it uses dedicated compute replicas.
#### What is the default number of replicas and their size? {#what-is-the-default-number-of-replicas-and-their-size}
Each ClickPipe defaults to 1 replica that is provided with 2 GiB of RAM and 0.5 vCPU.
This corresponds to **0.25** ClickHouse compute units (1 unit = 8 GiB RAM, 2 vCPUs).
#### What are the ClickPipes public prices? {#what-are-the-clickpipes-public-prices}
- Compute: \$0.20 per unit per hour (\$0.05 per replica per hour)
- Ingested data: \$0.04 per GB
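Using the public prices above and the default single replica of 0.25 compute units, a rough monthly estimate for one streaming ClickPipe can be sketched as follows (the 730 hours per month and the ingest volume are example assumptions):

```python
# Monthly cost sketch for a streaming ClickPipe, using the public
# prices listed above: $0.20 per compute unit per hour and $0.04 per
# ingested GB. The default replica (2 GiB RAM, 0.5 vCPU) is 0.25
# compute units. Hours per month and ingest volume are assumptions.

COMPUTE_PER_UNIT_HOUR = 0.20
INGEST_PER_GB = 0.04
UNITS_PER_REPLICA = 0.25  # one default replica

def clickpipe_monthly_cost(ingested_gb, replicas=1, hours=730):
    compute = replicas * UNITS_PER_REPLICA * COMPUTE_PER_UNIT_HOUR * hours
    ingest = ingested_gb * INGEST_PER_GB
    return compute + ingest

# One default replica ingesting 1000 GB in a month:
# compute ~ $36.50, ingest ~ $40.00
print(round(clickpipe_monthly_cost(1000), 2))
```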
#### How does it look in an illustrative example? {#how-does-it-look-in-an-illustrative-example}
The following examples assume a single replica unless explicitly mentioned.
</tbody>
</table>
$^1$ _Only ClickPipes compute for orchestration,
effective data transfer is assumed by the underlying ClickHouse service_
0 commit comments