
Commit 10aabbe

Merge branch 'main' into codeblock_syntax_highlighting
2 parents: 22e7883 + 76973d5

File tree: 6 files changed (+388 -133 lines)

docs/cloud/manage/billing.md

Lines changed: 226 additions & 29 deletions
@@ -5,11 +5,13 @@ title: 'Pricing'
 description: 'Overview page for ClickHouse Cloud pricing'
 ---
 
+import ClickPipesFAQ from './jan2025_faq/_snippets/_clickpipes_faq.md'
+
 For pricing information, see the [ClickHouse Cloud Pricing](https://clickhouse.com/pricing#pricing-calculator) page.
 ClickHouse Cloud bills based on the usage of compute, storage, [data transfer](/cloud/manage/network-data-transfer) (egress over the internet and cross-region), and [ClickPipes](/integrations/clickpipes).
 To understand what can affect your bill, and ways that you can manage your spend, keep reading.
 
-## Amazon Web Services (AWS) Example {#amazon-web-services-aws-example}
+## Amazon Web Services (AWS) example {#amazon-web-services-aws-example}
 
 :::note
 - Prices reflect AWS us-east-1 pricing.
@@ -70,7 +72,7 @@ Pricing breakdown for this example:
 </tbody>
 </table>
 
-### Scale (Always-on, Auto-scaling): From $499.38 per month {#scale-always-on-auto-scaling-from-49938-per-month}
+### Scale (always-on, auto-scaling): from $499.38 per month {#scale-always-on-auto-scaling-from-49938-per-month}
 
 Best for: workloads requiring enhanced SLAs (2+ replica services), scalability, and advanced security.
 
@@ -98,9 +100,9 @@ Pricing breakdown for this example:
 </tr>
 <tr>
 <td>Storage</td>
-<td>1TB of data + 1 backup<br></br>\$50.60</td>
-<td>2TB of data + 1 backup<br></br>\$101.20</td>
-<td>3TB of data + 1 backup<br></br>\$151.80</td>
+<td>1 TB of data + 1 backup<br></br>\$50.60</td>
+<td>2 TB of data + 1 backup<br></br>\$101.20</td>
+<td>3 TB of data + 1 backup<br></br>\$151.80</td>
 </tr>
 <tr>
 <td>Public internet egress data transfer</td>
@@ -148,9 +150,9 @@ Best for: large scale, mission critical deployments that have stringent security
 </tr>
 <tr>
 <td>Storage</td>
-<td>5TB + 1 backup<br></br>\$253.00</td>
-<td>10TB + 1 backup<br></br>\$506.00</td>
-<td>20TB + 1 backup<br></br>\$1,012.00</td>
+<td>5 TB + 1 backup<br></br>\$253.00</td>
+<td>10 TB + 1 backup<br></br>\$506.00</td>
+<td>20 TB + 1 backup<br></br>\$1,012.00</td>
 </tr>
 <tr>
 <td>Public internet egress data transfer</td>
@@ -173,7 +175,7 @@ Best for: large scale, mission critical deployments that have stringent security
 </tbody>
 </table>
 
-## FAQs {#faqs}
+## Frequently Asked Questions {#faqs}
 
 ### How is compute metered? {#how-is-compute-metered}
 
@@ -193,8 +195,8 @@ Users who need additional backups can do so by configuring additional [backups](
 
 ### How do I estimate compression? {#how-do-i-estimate-compression}
 
-Compression can vary quite a bit by dataset.
-It is dependent on how compressible the data is in the first place (number of high vs. low cardinality fields),
+Compression can vary from dataset to dataset.
+How much it varies is dependent on how compressible the data is in the first place (number of high vs. low cardinality fields),
 and how the user sets up the schema (using optional codecs or not, for instance).
 It can be on the order of 10x for common types of analytical data, but it can be significantly lower or higher as well.
 See the [optimizing documentation](/optimize/asynchronous-inserts) for guidance and this [Uber blog](https://www.uber.com/blog/logging/) for a detailed logging use case example.
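A note on the answer above (an annotation on this diff, not part of the commit): rather than guessing, you can measure compression directly on data you have already loaded. Below is a minimal sketch, assuming the `clickhouse-connect` Python driver with placeholder connection details and table name; `system.parts` and its byte-count columns are standard ClickHouse system-table fields.

```python
# Minimal sketch: measure the real compression ratio of a loaded table by
# comparing uncompressed vs. compressed bytes in system.parts.
# Host, credentials, and table name are placeholders; assumes the table
# already contains data.
import clickhouse_connect

client = clickhouse_connect.get_client(
    host="your-service.clickhouse.cloud",  # placeholder
    port=8443,
    username="default",
    password="your-password",              # placeholder
)

uncompressed, compressed = client.query(
    """
    SELECT
        sum(data_uncompressed_bytes),
        sum(data_compressed_bytes)
    FROM system.parts
    WHERE active AND database = currentDatabase() AND table = 'my_table'
    """
).result_rows[0]

print(f"compression ratio: ~{uncompressed / compressed:.1f}x")
```

A ratio measured on a representative sample is usually a better input for a storage estimate than the generic 10x figure.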
@@ -227,7 +229,7 @@ Billing follows a monthly billing cycle and the start date is tracked as the dat
 
 ### What controls does ClickHouse Cloud offer to manage costs for Scale and Enterprise services? {#what-controls-does-clickhouse-cloud-offer-to-manage-costs-for-scale-and-enterprise-services}
 
-- Trial and Annual Commit customers will be notified automatically by email when their consumption hits certain thresholds: `50%`, `75%`, and `90%`. This allows users to proactively manage their usage.
+- Trial and Annual Commit customers are notified automatically by email when their consumption hits certain thresholds: `50%`, `75%`, and `90%`. This allows users to proactively manage their usage.
 - ClickHouse Cloud allows users to set a maximum auto-scaling limit on their compute via [Advanced scaling control](/manage/scaling), a significant cost factor for analytical workloads.
 - The [Advanced scaling control](/manage/scaling) lets you set memory limits with an option to control the behavior of pausing/idling during inactivity.
 
@@ -251,13 +253,13 @@ The ClickHouse Cloud console provides a Usage display that details usage per ser
 
 ### How do I access my invoice for my marketplace subscription to the ClickHouse Cloud service? {#how-do-i-access-my-invoice-for-my-marketplace-subscription-to-the-clickhouse-cloud-service}
 
-All marketplace subscriptions will be billed and invoiced by the marketplace. You can view your invoice through the respective cloud provider marketplace directly.
+All marketplace subscriptions are billed and invoiced by the marketplace. You can view your invoice through the respective cloud provider marketplace directly.
 
 ### Why do the dates on the Usage statements not match my Marketplace Invoice? {#why-do-the-dates-on-the-usage-statements-not-match-my-marketplace-invoice}
 
 AWS Marketplace billing follows the calendar month cycle.
 For example, for usage between dates 01-Dec-2024 and 01-Jan-2025,
-an invoice will be generated between 3-Jan and 5-Jan-2025
+an invoice is generated between 3-Jan and 5-Jan-2025
 
 ClickHouse Cloud usage statements follow a different billing cycle where usage is metered
 and reported over 30 days starting from the day of sign up.
@@ -352,37 +354,137 @@ Cost estimation (per month) for this example on the **Scale Tier**:
 Without warehouses, you would have to pay for the amount of memory that the data engineer needs for his queries.
 However, combining two services in a warehouse and idling one of them helps you save money.
 
-## ClickPipes Pricing {#clickpipes-pricing}
+## ClickPipes pricing {#clickpipes-pricing}
+
+### ClickPipes for Postgres CDC {#clickpipes-for-postgres-cdc}
+
+This section outlines the pricing model for our Postgres Change Data Capture (CDC)
+connector in ClickPipes. In designing this model, our goal was to keep pricing
+highly competitive while staying true to our core vision:
+
+> Making it seamless and
+affordable for customers to move data from Postgres to ClickHouse for
+real-time analytics.
+
+The connector is over **5x more cost-effective** than external
+ETL tools and similar features in other database platforms. $^*$
+
+:::note
+Pricing will start being metered in monthly bills beginning **September 1st, 2025,**
+for all customers (both existing and new) using Postgres CDC ClickPipes. Until
+then, usage is free. Customers have a 3-month window starting May 29 (GA announcement)
+to review and optimize their costs if needed, although we expect most will not need
+to make any changes.
+:::
+
+$^*$ _For example, the external ETL tool Airbyte, which offers similar CDC capabilities,
+charges $10/GB (excluding credits)—more than 20 times the cost of Postgres CDC in
+ClickPipes for moving 1TB of data._
+
+#### Pricing dimensions {#pricing-dimensions}
+
+There are two main dimensions to pricing:
+
+1. **Ingested Data**: The raw, uncompressed bytes coming from Postgres and
+ingested into ClickHouse.
+2. **Compute**: The compute units provisioned per service manage multiple
+Postgres CDC ClickPipes and are separate from the compute units used by the
+ClickHouse Cloud service. This additional compute is dedicated specifically
+to Postgres CDC ClickPipes. Compute is billed at the service level, not per
+individual pipe. Each compute unit includes 2 vCPUs and 8 GB of RAM.
+
+#### Ingested data {#ingested-data}
+
+The Postgres CDC connector operates in two main phases:
+
+- **Initial load / resync**: This captures a full snapshot of Postgres tables
+and occurs when a pipe is first created or re-synced.
+- **Continuous Replication (CDC)**: Ongoing replication of changes—such as inserts,
+updates, deletes, and schema changes—from Postgres to ClickHouse.
+
+In most use cases, continuous replication accounts for over 90% of a ClickPipe
+life cycle. Because initial loads involve transferring a large volume of data all
+at once, we offer a lower rate for that phase.
+
+| Phase                            | Cost         |
+|----------------------------------|--------------|
+| **Initial load / resync**        | $0.10 per GB |
+| **Continuous Replication (CDC)** | $0.20 per GB |
+
+#### Compute {#compute}
 
-### What does the ClickPipes pricing structure look like? {#what-does-the-clickpipes-pricing-structure-look-like}
+This dimension covers the compute units provisioned per service just for Postgres
+ClickPipes. Compute is shared across all Postgres pipes within a service. **It
+is provisioned when the first Postgres pipe is created and deallocated when no
+Postgres CDC pipes remain**. The amount of compute provisioned depends on your
+organization’s tier:
+
+| Tier                         | Cost                                          |
+|------------------------------|-----------------------------------------------|
+| **Basic Tier**               | 0.5 compute unit per service — $0.10 per hour |
+| **Scale or Enterprise Tier** | 1 compute unit per service — $0.20 per hour   |
+
+#### Example {#example}
+
+Let’s say your service is in Scale tier and has the following setup:
+
+- 2 Postgres ClickPipes running continuous replication
+- Each pipe ingests 500 GB of data changes (CDC) per month
+- When the first pipe is kicked off, the service provisions **1 compute unit under the Scale Tier** for Postgres CDC
+
+##### Monthly cost breakdown {#cost-breakdown}
+
+**Ingested Data (CDC)**:
+
+$$ 2 \text{ pipes} \times 500 \text{ GB} = 1,000 \text{ GB per month} $$
+
+$$ 1,000 \text{ GB} \times \$0.20/\text{GB} = \$200 $$
+
+**Compute**:
+
+$$1 \text{ compute unit} \times \$0.20/\text{hr} \times 730 \text{ hours (approximate month)} = \$146$$
+
+:::note
+Compute is shared across both pipes
+:::
+
+**Total Monthly Cost**:
+
+$$\$200 \text{ (ingest)} + \$146 \text{ (compute)} = \$346$$
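To rerun the worked example above with your own numbers, here is a minimal Python sketch of the pricing model as this section describes it (an annotation on the diff, not part of the commit). The per-GB rates, tier compute costs, and the 730-hour approximate month come from the text; the function and constant names are illustrative.

```python
# Sketch of the Postgres CDC ClickPipes pricing described above.
INITIAL_LOAD_RATE = 0.10   # $ per GB, initial load / resync
CDC_RATE = 0.20            # $ per GB, continuous replication (CDC)
HOURS_PER_MONTH = 730      # approximate month, as in the example above

# Fixed compute cost per service per hour, shared by all Postgres pipes:
# Basic = 0.5 unit at $0.20/unit/hr, Scale/Enterprise = 1 unit.
COMPUTE_HOURLY = {"basic": 0.10, "scale": 0.20, "enterprise": 0.20}

def monthly_cdc_cost(tier: str, cdc_gb: float, initial_load_gb: float = 0.0) -> float:
    ingest = cdc_gb * CDC_RATE + initial_load_gb * INITIAL_LOAD_RATE
    compute = COMPUTE_HOURLY[tier] * HOURS_PER_MONTH  # per service, not per pipe
    return ingest + compute

# The example above: two pipes x 500 GB of CDC per month on the Scale tier.
print(monthly_cdc_cost("scale", cdc_gb=2 * 500))  # -> 346.0
```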
+
+### ClickPipes for streaming and object storage {#clickpipes-for-streaming-object-storage}
+
+This section outlines the pricing model of ClickPipes for streaming and object storage.
+
+#### What does the ClickPipes pricing structure look like? {#what-does-the-clickpipes-pricing-structure-look-like}
 
 It consists of two dimensions
 
 - **Compute**: Price per unit per hour
-Compute represents the cost of running the ClickPipes replica pods whether they actively ingest data or not.
-It applies to all ClickPipes types.
+  Compute represents the cost of running the ClickPipes replica pods whether they actively ingest data or not.
+  It applies to all ClickPipes types.
 - **Ingested data**: per GB pricing
-The ingested data rate applies to all streaming ClickPipes
-(Kafka, Confluent, Amazon MSK, Amazon Kinesis, Redpanda, WarpStream, Azure Event Hubs)
-for the data transferred via the replica pods. The ingested data size (GB) is charged based on bytes received from the source (uncompressed or compressed).
+  The ingested data rate applies to all streaming ClickPipes
+  (Kafka, Confluent, Amazon MSK, Amazon Kinesis, Redpanda, WarpStream, Azure Event Hubs)
+  for the data transferred via the replica pods. The ingested data size (GB) is charged based on bytes received from the source (uncompressed or compressed).
 
-### What are ClickPipes replicas? {#what-are-clickpipes-replicas}
+#### What are ClickPipes replicas? {#what-are-clickpipes-replicas}
 
-ClickPipes ingests data from remote data sources via a dedicated infrastructure
-that runs and scales independently of the ClickHouse Cloud service.
+ClickPipes ingests data from remote data sources via a dedicated infrastructure
+that runs and scales independently of the ClickHouse Cloud service.
 For this reason, it uses dedicated compute replicas.
 
-### What is the default number of replicas and their size? {#what-is-the-default-number-of-replicas-and-their-size}
+#### What is the default number of replicas and their size? {#what-is-the-default-number-of-replicas-and-their-size}
 
-Each ClickPipe defaults to 1 replica that is provided with 2 GiB of RAM and 0.5 vCPU.
+Each ClickPipe defaults to 1 replica that is provided with 2 GiB of RAM and 0.5 vCPU.
 This corresponds to **0.25** ClickHouse compute units (1 unit = 8 GiB RAM, 2 vCPUs).
 
-### What are the ClickPipes public prices? {#what-are-the-clickpipes-public-prices}
+#### What are the ClickPipes public prices? {#what-are-the-clickpipes-public-prices}
 
 - Compute: \$0.20 per unit per hour (\$0.05 per replica per hour)
 - Ingested data: \$0.04 per GB
 
-### How does it look in an illustrative example? {#how-does-it-look-in-an-illustrative-example}
+#### How does it look in an illustrative example? {#how-does-it-look-in-an-illustrative-example}
 
 The following examples assume a single replica unless explicitly mentioned.
 
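The example table referenced above is elided from this diff, but the public prices are enough for a back-of-the-envelope estimate. A minimal sketch (an annotation, not part of the commit): the 0.25 units-per-replica conversion and both rates come from this section, while the single-replica, 1 TB per month inputs are illustrative assumptions.

```python
# Sketch of streaming / object storage ClickPipes pricing from the public
# prices above. One replica = 0.25 compute units, so $0.20 per unit-hour
# works out to the quoted $0.05 per replica-hour.
COMPUTE_PER_UNIT_HOUR = 0.20   # $ per compute unit per hour
UNITS_PER_REPLICA = 0.25       # 1 replica = 2 GiB RAM / 0.5 vCPU
INGEST_RATE = 0.04             # $ per GB received from the source
HOURS_PER_MONTH = 730          # same approximate month as the CDC example

def monthly_streaming_cost(replicas: int, ingested_gb: float) -> float:
    compute = replicas * UNITS_PER_REPLICA * COMPUTE_PER_UNIT_HOUR * HOURS_PER_MONTH
    return compute + ingested_gb * INGEST_RATE

# Illustrative inputs (not from the elided table): one replica, 1 TB/month.
print(round(monthly_streaming_cost(replicas=1, ingested_gb=1000), 2))  # -> 76.5
```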
@@ -409,6 +511,101 @@ The following examples assume a single replica unless explicitly mentioned.
 </tbody>
 </table>
 
-$^1$ _Only ClickPipes compute for orchestration,
+$^1$ _Only ClickPipes compute for orchestration,
 effective data transfer is assumed by the underlying Clickhouse Service_
 
+## ClickPipes pricing FAQ {#clickpipes-pricing-faq}
+
+Below, you will find frequently asked questions about CDC ClickPipes and streaming
+and object-based storage ClickPipes.
+
+### FAQ for Postgres CDC ClickPipes {#faq-postgres-cdc-clickpipe}
+
+<details>
+
+<summary>Is the ingested data measured in pricing based on compressed or uncompressed size?</summary>
+
+The ingested data is measured as _uncompressed data_ coming from Postgres—both
+during the initial load and CDC (via the replication slot). Postgres does not
+compress data during transit by default, and ClickPipe processes the raw,
+uncompressed bytes.
+
+</details>
+
+<details>
+
+<summary>When will Postgres CDC pricing start appearing on my bills?</summary>
+
+Postgres CDC ClickPipes pricing begins appearing on monthly bills starting
+**September 1st, 2025**, for all customers—both existing and new. Until then,
+usage is free. Customers have a **3-month window** starting from **May 29**
+(the GA announcement date) to review and optimize their usage if needed, although
+we expect most won’t need to make any changes.
+
+</details>
+
+<details>
+
+<summary>Will I be charged if I pause my pipes?</summary>
+
+No data ingestion charges apply while a pipe is paused, since no data is moved.
+However, compute charges still apply—either 0.5 or 1 compute unit—based on your
+organization’s tier. This is a fixed service-level cost and applies across all
+pipes within that service.
+
+</details>
+
+<details>
+
+<summary>How can I estimate my pricing?</summary>
+
+The Overview page in ClickPipes provides metrics for both initial load/resync and
+CDC data volumes. You can estimate your Postgres CDC costs using these metrics
+in conjunction with the ClickPipes pricing.
+
+</details>
+
+<details>
+
+<summary>Can I scale the compute allocated for Postgres CDC in my service?</summary>
+
+By default, compute scaling is not user-configurable. The provisioned resources
+are optimized to handle most customer workloads optimally. If your use case
+requires more or less compute, please open a support ticket so we can evaluate
+your request.
+
+</details>
+
+<details>
+
+<summary>What is the pricing granularity?</summary>
+
+- **Compute**: Billed per hour. Partial hours are rounded up to the next hour.
+- **Ingested Data**: Measured and billed per gigabyte (GB) of uncompressed data.
+
+</details>
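A small illustration of the compute granularity rule quoted above (an annotation, not part of the commit): partial hours round up to the next whole hour before billing.

```python
# Illustrative only: compute time is billed per hour, with partial hours
# rounded up to the next whole hour (per the granularity FAQ above).
import math

def billable_compute_hours(elapsed_hours: float) -> int:
    return math.ceil(elapsed_hours)

print(billable_compute_hours(10.2))  # -> 11
print(billable_compute_hours(24.0))  # -> 24
```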
+
+<details>
+
+<summary>Can I use my ClickHouse Cloud credits for Postgres CDC via ClickPipes?</summary>
+
+Yes. ClickPipes pricing is part of the unified ClickHouse Cloud pricing. Any
+platform credits you have will automatically apply to ClickPipes usage as well.
+
+</details>
+
+<details>
+
+<summary>How much additional cost should I expect from Postgres CDC ClickPipes in my existing monthly ClickHouse Cloud spend?</summary>
+
+The cost varies based on your use case, data volume, and organization tier.
+That said, most existing customers see an increase of **0–15%** relative to their
+existing monthly ClickHouse Cloud spend post trial. Actual costs may vary
+depending on your workload—some workloads involve high data volumes with
+lesser processing, while others require more processing with less data.
+
+</details>
+
+### FAQ for streaming and object storage ClickPipes {#faq-streaming-and-object-storage}
+
+<ClickPipesFAQ/>
