From 17a034d5480b8aeaae5ec4d5be8ff3d5532cfe06 Mon Sep 17 00:00:00 2001
From: Shaun Struwig <41984034+Blargian@users.noreply.github.com>
Date: Fri, 23 May 2025 11:32:24 +0200
Subject: [PATCH 1/6] update doc with clickpipes faqs
---
docs/cloud/manage/billing.md | 198 ++++++++++++++++--
.../jan2025_faq/_snippets/_clickpipes_faq.md | 144 +++++++++++++
docs/cloud/manage/jan2025_faq/dimensions.md | 99 +--------
3 files changed, 330 insertions(+), 111 deletions(-)
create mode 100644 docs/cloud/manage/jan2025_faq/_snippets/_clickpipes_faq.md
diff --git a/docs/cloud/manage/billing.md b/docs/cloud/manage/billing.md
index f51fe1c1f6b..bda8d10f54a 100644
--- a/docs/cloud/manage/billing.md
+++ b/docs/cloud/manage/billing.md
@@ -5,6 +5,8 @@ title: 'Pricing'
description: 'Overview page for ClickHouse Cloud pricing'
---
+import ClickPipesFAQ from './jan2025_faq/_snippets/_clickpipes_faq.md'
+
For pricing information, see the [ClickHouse Cloud Pricing](https://clickhouse.com/pricing#pricing-calculator) page.
ClickHouse Cloud bills based on the usage of compute, storage, [data transfer](/cloud/manage/network-data-transfer) (egress over the internet and cross-region), and [ClickPipes](/integrations/clickpipes).
To understand what can affect your bill, and ways that you can manage your spend, keep reading.
@@ -354,35 +356,108 @@ However, combining two services in a warehouse and idling one of them helps you
## ClickPipes Pricing {#clickpipes-pricing}
-### What does the ClickPipes pricing structure look like? {#what-does-the-clickpipes-pricing-structure-look-like}
+### ClickPipes for Postgres CDC {#clickpipes-for-postgres-cdc}
+
+This section outlines the pricing model for our Postgres Change Data Capture (CDC)
+connector in ClickPipes. In designing this model, our goal was to keep pricing
+highly competitive while staying true to our core vision:
+
+> Making it seamless and
+affordable for customers to move data from Postgres to ClickHouse for
+real-time analytics.
+
+The connector is over **5x more cost-effective** than external
+ETL tools and similar features in other database platforms. $^*$
+
+:::note
+Pricing will begin appearing in monthly bills on **September 1st, 2025**
+for all customers (both existing and new) using Postgres CDC ClickPipes. Until
+then, usage is free. Customers have a 3-month window starting May 29 (the GA announcement)
+to review and optimize their costs if needed, although we expect most will not need
+to make any changes.
+:::
+
+$^*$ _For example, the external ETL tool Airbyte, which offers similar CDC capabilities,
+charges \$10/GB (excluding credits)—more than 20 times the cost of Postgres CDC in
+ClickPipes for moving 1TB of data._
+
+#### Pricing dimensions {#pricing-dimensions}
+
+There are two main dimensions to pricing:
+
+1. **Ingested Data**: The raw, uncompressed bytes coming from Postgres and
+ ingested into ClickHouse.
+2. **Compute**: Compute units are provisioned per service to manage multiple
+   Postgres CDC ClickPipes, and they are separate from the compute units used by the
+   ClickHouse Cloud service. This additional compute is dedicated specifically
+   to Postgres CDC ClickPipes. Compute is billed at the service level, not per
+   individual pipe. Each compute unit includes 2 vCPUs and 8 GB of RAM.
+
+#### Ingested data {#ingested-data}
+
+The Postgres CDC connector operates in two main phases:
+
+- **Initial Load / Resync**: This captures a full snapshot of Postgres tables
+ and occurs when a pipe is first created or re-synced.
+- **Continuous Replication (CDC)**: Ongoing replication of changes—such as inserts,
+ updates, deletes, and schema changes—from Postgres to ClickHouse.
+
+In most use cases, continuous replication accounts for over 90% of a ClickPipe's
+lifecycle. Because initial loads involve transferring a large volume of data all
+at once, we offer a lower rate for that phase.
+
+| Phase | Cost |
+|----------------------------------|--------------|
+| **Initial Load / Resync** | $0.10 per GB |
+| **Continuous Replication (CDC)** | $0.20 per GB |
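+
+As a worked example (with hypothetical volumes), a pipe that snapshots 1 TB
+(1,000 GB) during the initial load and then replicates 100 GB of changes per
+month would cost:
+
+$$ 1{,}000 \text{ GB} \times \$0.10/\text{GB} = \$100 \text{ (one-time initial load)} $$
+
+$$ 100 \text{ GB} \times \$0.20/\text{GB} = \$20 \text{ per month (CDC)} $$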
+
+#### Compute {#compute}
+
+This dimension covers the compute units provisioned per service just for Postgres
+ClickPipes. Compute is shared across all Postgres pipes within a service. **It
+is provisioned when the first Postgres pipe is created and deallocated when no
+Postgres CDC pipes remain**. The amount of compute provisioned depends on your
+organization’s tier:
+
+| Tier | Cost |
+|------------------------------|-----------------------------------------------|
+| **Basic Tier** | 0.5 compute unit per service — $0.10 per hour |
+| **Scale or Enterprise Tier** | 1 compute unit per service — $0.20 per hour |
+
+
+### ClickPipes for Streaming and Object Storage {#clickpipes-for-streaming-object-storage}
+
+This section outlines the pricing model of ClickPipes for streaming and object storage.
+
+#### What does the ClickPipes pricing structure look like? {#what-does-the-clickpipes-pricing-structure-look-like}
It consists of two dimensions
- **Compute**: Price per unit per hour
- Compute represents the cost of running the ClickPipes replica pods whether they actively ingest data or not.
- It applies to all ClickPipes types.
+ Compute represents the cost of running the ClickPipes replica pods whether they actively ingest data or not.
+ It applies to all ClickPipes types.
- **Ingested data**: per GB pricing
- The ingested data rate applies to all streaming ClickPipes
- (Kafka, Confluent, Amazon MSK, Amazon Kinesis, Redpanda, WarpStream, Azure Event Hubs)
- for the data transferred via the replica pods. The ingested data size (GB) is charged based on bytes received from the source (uncompressed or compressed).
+ The ingested data rate applies to all streaming ClickPipes
+ (Kafka, Confluent, Amazon MSK, Amazon Kinesis, Redpanda, WarpStream, Azure Event Hubs)
+ for the data transferred via the replica pods. The ingested data size (GB) is charged based on bytes received from the source (uncompressed or compressed).
-### What are ClickPipes replicas? {#what-are-clickpipes-replicas}
+#### What are ClickPipes replicas? {#what-are-clickpipes-replicas}
-ClickPipes ingests data from remote data sources via a dedicated infrastructure
-that runs and scales independently of the ClickHouse Cloud service.
+ClickPipes ingests data from remote data sources via a dedicated infrastructure
+that runs and scales independently of the ClickHouse Cloud service.
For this reason, it uses dedicated compute replicas.
-### What is the default number of replicas and their size? {#what-is-the-default-number-of-replicas-and-their-size}
+#### What is the default number of replicas and their size? {#what-is-the-default-number-of-replicas-and-their-size}
-Each ClickPipe defaults to 1 replica that is provided with 2 GiB of RAM and 0.5 vCPU.
+Each ClickPipe defaults to 1 replica that is provided with 2 GiB of RAM and 0.5 vCPU.
This corresponds to **0.25** ClickHouse compute units (1 unit = 8 GiB RAM, 2 vCPUs).
-### What are the ClickPipes public prices? {#what-are-the-clickpipes-public-prices}
+#### What are the ClickPipes public prices? {#what-are-the-clickpipes-public-prices}
- Compute: \$0.20 per unit per hour (\$0.05 per replica per hour)
- Ingested data: \$0.04 per GB
-### How does it look in an illustrative example? {#how-does-it-look-in-an-illustrative-example}
+#### How does it look in an illustrative example? {#how-does-it-look-in-an-illustrative-example}
The following examples assume a single replica unless explicitly mentioned.
@@ -409,6 +484,101 @@ The following examples assume a single replica unless explicitly mentioned.
-$^1$ _Only ClickPipes compute for orchestration,
+$^1$ _Only ClickPipes compute for orchestration,
effective data transfer is assumed by the underlying ClickHouse service_
+## ClickPipes pricing FAQ {#clickpipes-pricing-faq}
+
+Below, you will find frequently asked questions about Postgres CDC ClickPipes and about
+streaming and object storage ClickPipes.
+
+### FAQ for Postgres CDC ClickPipes {#faq-postgres-cdc-clickpipe}
+
+
+
+Is the ingested data measured in pricing based on compressed or uncompressed size?
+
+The ingested data is measured as _uncompressed data_ coming from Postgres—both
+during the initial load and CDC (via the replication slot). Postgres does not
+compress data during transit by default, and ClickPipe processes the raw,
+uncompressed bytes.
+
+
+
+
+
+When will Postgres CDC pricing start appearing on my bills?
+
+Postgres CDC ClickPipes pricing will begin appearing on monthly bills starting
+**September 1st, 2025**, for all customers—both existing and new. Until then,
+usage is free. Customers have a **3-month window** starting from **May 29**
+(the GA announcement date) to review and optimize their usage if needed, although
+we expect most won’t need to make any changes.
+
+
+
+
+
+Will I be charged if I pause my pipes?
+
+No data ingestion charges apply while a pipe is paused, since no data is moved.
+However, compute charges still apply—either 0.5 or 1 compute unit—based on your
+organization’s tier. This is a fixed service-level cost and applies across all
+pipes within that service.
+
+
+
+
+
+How can I estimate my pricing?
+
+The Overview page in ClickPipes provides metrics for both initial load/resync and
+CDC data volumes. You can estimate your Postgres CDC costs using these metrics
+in conjunction with the ClickPipes pricing.
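+
+For example (hypothetical figures): if the Overview page shows a 2,000 GB initial
+load and roughly 300 GB of CDC volume per month on a Scale tier service, the
+estimate would be:
+
+$$ 2{,}000 \text{ GB} \times \$0.10/\text{GB} = \$200 \text{ (one-time)} $$
+
+$$ 300 \text{ GB} \times \$0.20/\text{GB} + 1 \text{ compute unit} \times \$0.20/\text{hr} \times 730 \text{ hours} = \$60 + \$146 = \$206 \text{ per month} $$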
+
+
+
+
+
+Can I scale the compute allocated for Postgres CDC in my service?
+
+By default, compute scaling is not user-configurable. The provisioned resources
+are sized to handle most customer workloads well. If your use case
+requires more or less compute, please open a support ticket so we can evaluate
+your request.
+
+
+
+
+
+What is the pricing granularity?
+
+- **Compute**: Billed per hour. Partial hours are rounded up to the next hour.
+- **Ingested Data**: Measured and billed per gigabyte (GB) of uncompressed data.
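+
+For instance (a hypothetical case), a Scale tier compute unit that runs for
+10.5 hours is rounded up and billed for 11 hours:
+
+$$ 11 \text{ hours} \times \$0.20/\text{hr} = \$2.20 $$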
+
+
+
+
+
+Can I use my ClickHouse Cloud credits for Postgres CDC via ClickPipes?
+
+Yes. ClickPipes pricing is part of the unified ClickHouse Cloud pricing. Any
+platform credits you have will automatically apply to ClickPipes usage as well.
+
+
+
+
+
+How much additional cost should I expect from Postgres CDC ClickPipes in my existing monthly ClickHouse Cloud spend?
+
+The cost varies based on your use case, data volume, and organization tier.
+That said, most existing customers see an increase of **0–15%** relative to their
+existing monthly ClickHouse Cloud spend post trial. Actual costs may vary
+depending on your workload—some workloads involve high data volumes with
+less processing, while others require more processing with less data.
+
+
+
+### FAQ for streaming and object storage ClickPipes {#faq-streaming-and-object-storage}
+
+
\ No newline at end of file
diff --git a/docs/cloud/manage/jan2025_faq/_snippets/_clickpipes_faq.md b/docs/cloud/manage/jan2025_faq/_snippets/_clickpipes_faq.md
new file mode 100644
index 00000000000..dac3004d5b2
--- /dev/null
+++ b/docs/cloud/manage/jan2025_faq/_snippets/_clickpipes_faq.md
@@ -0,0 +1,144 @@
+import Image from '@theme/IdealImage';
+import clickpipesPricingFaq1 from '@site/static/images/cloud/manage/jan2025_faq/external_clickpipes_pricing_faq_1.png';
+import clickpipesPricingFaq2 from '@site/static/images/cloud/manage/jan2025_faq/external_clickpipes_pricing_faq_2.png';
+import clickpipesPricingFaq3 from '@site/static/images/cloud/manage/jan2025_faq/external_clickpipes_pricing_faq_3.png';
+
+
+
+Why are we introducing a pricing model for ClickPipes now?
+
+We decided to initially launch ClickPipes for free with the idea of gathering feedback,
+refining features, and ensuring it meets user needs.
+As the GA platform has grown and effectively stood the test of time by moving trillions of rows,
+introducing a pricing model allows us to continue improving the service,
+maintaining the infrastructure, and providing dedicated support and new connectors.
+
+
+
+
+
+What are ClickPipes replicas?
+
+ClickPipes ingests data from remote data sources via a dedicated infrastructure
+that runs and scales independently of the ClickHouse Cloud service.
+For this reason, it uses dedicated compute replicas.
+The diagrams below show a simplified architecture.
+
+For streaming ClickPipes, ClickPipes replicas access the remote data sources (e.g., a Kafka broker),
+pull the data, then process and ingest it into the destination ClickHouse service.
+
+
+
+In the case of object storage ClickPipes,
+the ClickPipes replica orchestrates the data loading task
+(identifying files to copy, maintaining the state, and moving partitions),
+while the data is pulled directly from the ClickHouse service.
+
+
+
+
+
+
+
+What is the default number of replicas and their size?
+
+Each ClickPipe defaults to 1 replica that is provided with 2 GiB of RAM and 0.5 vCPU.
+This corresponds to **0.25** ClickHouse compute units (1 unit = 8 GiB RAM, 2 vCPUs).
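+
+The **0.25** figure follows directly from the replica-to-unit ratio:
+
+$$ \frac{2 \text{ GiB RAM}}{8 \text{ GiB RAM}} = \frac{0.5 \text{ vCPU}}{2 \text{ vCPUs}} = 0.25 $$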
+
+
+
+
+
+Can ClickPipes replicas be scaled?
+
+Currently, only ClickPipes for streaming can be scaled horizontally
+by adding more replicas each with a base unit of **0.25** ClickHouse compute units.
+Vertical scaling is also available on demand for specific use cases (adding more CPU and RAM per replica).
+
+
+
+
+
+How many ClickPipes replicas do I need?
+
+It depends on the workload throughput and latency requirements.
+We recommend starting with the default value of 1 replica, measuring your latency, and adding replicas if needed.
+Keep in mind that for Kafka ClickPipes, you also have to scale the Kafka broker partitions accordingly.
+The scaling controls are available under "settings" for each streaming ClickPipe.
+
+
+
+
+
+
+
+What does the ClickPipes pricing structure look like?
+
+It consists of two dimensions:
+- **Compute**: Price per unit per hour
+ Compute represents the cost of running the ClickPipes replica pods whether they actively ingest data or not.
+ It applies to all ClickPipes types.
+- **Ingested data**: per GB pricing
+ The ingested data rate applies to all streaming ClickPipes
+ (Kafka, Confluent, Amazon MSK, Amazon Kinesis, Redpanda, WarpStream,
+ Azure Event Hubs) for the data transferred via the replica pods.
+ The ingested data size (GB) is charged based on bytes received from the source (uncompressed or compressed).
+
+
+
+
+
+What are the ClickPipes public prices?
+
+- Compute: \$0.20 per unit per hour (\$0.05 per replica per hour)
+- Ingested data: \$0.04 per GB
+
+
+
+
+
+How does it look in an illustrative example?
+
+For example, ingesting 1 TB of data over 24 hours using the Kafka connector with a single replica (0.25 compute unit) will cost:
+
+$$
+(0.25 \times 0.20 \times 24) + (0.04 \times 1000) = \$41.2
+$$
+
+
+For object storage connectors (S3 and GCS),
+only the ClickPipes compute cost is incurred since the ClickPipes pod is not processing data
+but only orchestrating the transfer which is operated by the underlying ClickHouse service:
+
+$$
+0.25 \times 0.20 \times 24 = \$1.2
+$$
+
+
+
+
+
+When does the new pricing model take effect?
+
+The new pricing model will take effect for all organizations created after January 27th, 2025.
+
+
+
+
+
+What happens to current users?
+
+Existing users will have a **60-day grace period** where the ClickPipes service continues to be offered for free.
+Billing will automatically start for ClickPipes for existing users on **March 24th, 2025.**
+
+
+
+
+
+How does ClickPipes pricing compare to the market?
+
+The philosophy behind ClickPipes pricing is
+to cover the operating costs of the platform while offering an easy and reliable way to move data to ClickHouse Cloud.
+From that angle, our market analysis revealed that we are positioned competitively.
+
+
\ No newline at end of file
diff --git a/docs/cloud/manage/jan2025_faq/dimensions.md b/docs/cloud/manage/jan2025_faq/dimensions.md
index d375d337663..dbba4ed1c1c 100644
--- a/docs/cloud/manage/jan2025_faq/dimensions.md
+++ b/docs/cloud/manage/jan2025_faq/dimensions.md
@@ -4,13 +4,12 @@ slug: /cloud/manage/jan-2025-faq/pricing-dimensions
keywords: ['new pricing', 'dimensions']
description: 'Pricing dimensions for data transfer and ClickPipes'
---
-
import Image from '@theme/IdealImage';
import clickpipesPricingFaq1 from '@site/static/images/cloud/manage/jan2025_faq/external_clickpipes_pricing_faq_1.png';
import clickpipesPricingFaq2 from '@site/static/images/cloud/manage/jan2025_faq/external_clickpipes_pricing_faq_2.png';
import clickpipesPricingFaq3 from '@site/static/images/cloud/manage/jan2025_faq/external_clickpipes_pricing_faq_3.png';
import NetworkPricing from '@site/docs/cloud/manage/_snippets/_network_transfer_rates.md';
-
+import ClickPipesFAQ from './_snippets/_clickpipes_faq.md'
The following dimensions have been added to the new ClickHouse Cloud pricing.
@@ -35,98 +34,4 @@ Data transfer prices will **not** be tiered as usage increases. Note that the pr
## ClickPipes Pricing FAQ {#clickpipes-pricing-faq}
-### Why are we introducing a pricing model for ClickPipes now? {#why-are-we-introducing-a-pricing-model-for-clickpipes-now}
-
-We decided to initially launch ClickPipes for free with the idea to gather feedback, refine features,
-and ensure it meets user needs.
-As the GA platform has grown and effectively stood the test of time by moving trillions of rows,
-introducing a pricing model allows us to continue improving the service,
-maintaining the infrastructure, and providing dedicated support and new connectors.
-
-### What are ClickPipes replicas? {#what-are-clickpipes-replicas}
-
-ClickPipes ingests data from remote data sources via a dedicated infrastructure
-that runs and scales independently of the ClickHouse Cloud service.
-For this reason, it uses dedicated compute replicas.
-The diagrams below show a simplified architecture.
-
-For streaming ClickPipes, ClickPipes replicas access the remote data sources (e.g., a Kafka broker),
-pull the data, process and ingest it into the destination ClickHouse service.
-
-
-
-In the case of object storage ClickPipes,
-the ClickPipes replica orchestrates the data loading task
-(identifying files to copy, maintaining the state, and moving partitions),
-while the data is pulled directly from the ClickHouse service.
-
-
-
-### What is the default number of replicas and their size? {#what-is-the-default-number-of-replicas-and-their-size}
-
-Each ClickPipe defaults to 1 replica that is provided with 2 GiB of RAM and 0.5 vCPU.
-This corresponds to **0.25** ClickHouse compute units (1 unit = 8 GiB RAM, 2 vCPUs).
-
-### Can ClickPipes replicas be scaled? {#can-clickpipes-replicas-be-scaled}
-
-Currently, only ClickPipes for streaming can be scaled horizontally
-by adding more replicas each with a base unit of **0.25** ClickHouse compute units.
-Vertical scaling is also available on demand for specific use cases (adding more CPU and RAM per replica).
-
-### How many ClickPipes replicas do I need? {#how-many-clickpipes-replicas-do-i-need}
-
-It depends on the workload throughput and latency requirements.
-We recommend starting with the default value of 1 replica, measuring your latency, and adding replicas if needed.
-Keep in mind that for Kafka ClickPipes, you also have to scale the Kafka broker partitions accordingly.
-The scaling controls are available under "settings" for each streaming ClickPipe.
-
-
-
-### What does the ClickPipes pricing structure look like? {#what-does-the-clickpipes-pricing-structure-look-like}
-
-It consists of two dimensions:
-- **Compute**: Price per unit per hour
- Compute represents the cost of running the ClickPipes replica pods whether they actively ingest data or not.
- It applies to all ClickPipes types.
-- **Ingested data**: per GB pricing
- The ingested data rate applies to all streaming ClickPipes
- (Kafka, Confluent, Amazon MSK, Amazon Kinesis, Redpanda, WarpStream,
- Azure Event Hubs) for the data transferred via the replica pods.
- The ingested data size (GB) is charged based on bytes received from the source (uncompressed or compressed).
-
-### What are the ClickPipes public prices? {#what-are-the-clickpipes-public-prices}
-
-- Compute: \$0.20 per unit per hour ($0.05 per replica per hour)
-- Ingested data: $0.04 per GB
-
-### How does it look in an illustrative example? {#how-does-it-look-in-an-illustrative-example}
-
-For example, ingesting 1 TB of data over 24 hours using the Kafka connector using a single replica (0.25 compute unit) will cost:
-
-$$
-(0.25 \times 0.20 \times 24) + (0.04 \times 1000) = \$41.2
-$$
-
-
-For object storage connectors (S3 and GCS),
-only the ClickPipes compute cost is incurred since the ClickPipes pod is not processing data
-but only orchestrating the transfer which is operated by the underlying ClickHouse service:
-
-$$
-0.25 \times 0,20 \times 24 = \$1.2
-$$
-
-### When does the new pricing model take effect? {#when-does-the-new-pricing-model-take-effect}
-
-The new pricing model will take effect for all organizations created after January 27th, 2025.
-
-### What happens to current users? {#what-happens-to-current-users}
-
-Existing users will have a **60-day grace period** where the ClickPipes service continues to be offered for free.
-Billing will automatically start for ClickPipes for existing users on **March 24th, 2025.**
-
-### How does ClickPipes pricing compare to the market? {#how-does-clickpipes-pricing-compare-to-the-market}
-
-The philosophy behind ClickPipes pricing is
-to cover the operating costs of the platform while offering an easy and reliable way to move data to ClickHouse Cloud.
-From that angle, our market analysis revealed that we are positioned competitively.
+
From 85ae182200ed5b80789d361fc157e3f05bed18fb Mon Sep 17 00:00:00 2001
From: Shaun Struwig <41984034+Blargian@users.noreply.github.com>
Date: Fri, 23 May 2025 13:10:46 +0200
Subject: [PATCH 2/6] fix aspell
---
scripts/check-doc-aspell | 28 +++++++++++++++++++++++++++-
styles/ClickHouse/Headings.yml | 3 +++
2 files changed, 30 insertions(+), 1 deletion(-)
diff --git a/scripts/check-doc-aspell b/scripts/check-doc-aspell
index 386c181d6db..a922daaf615 100755
--- a/scripts/check-doc-aspell
+++ b/scripts/check-doc-aspell
@@ -67,8 +67,34 @@ get_ignore_words_for_file() {
# Use this to filter out lines we don't wanna consider in spell-check - slugs, imports, and img src JSX elements
preprocess_file() {
local file=$1
-    sed -E 's/\{#[^}]*\}//g' "$file" | grep -Ev '^(slug:|import [[:alnum:]_, {}*]+ from .+;?|<img[^>]*\/?>)'
+    sed -E 's/\{#[^}]*\}//g' "$file" \
+    | sed -E 's/<img[^>]*\/?>//g' \
+    | sed -E 's/<Image[^>]*\/?>//g' \
+    | grep -Ev '(^[[:space:]]*(slug:|import ))'
Date: Fri, 23 May 2025 14:29:53 +0200
Subject: [PATCH 3/6] space after frontmatter
---
docs/cloud/manage/jan2025_faq/dimensions.md | 1 +
1 file changed, 1 insertion(+)
diff --git a/docs/cloud/manage/jan2025_faq/dimensions.md b/docs/cloud/manage/jan2025_faq/dimensions.md
index 3a92699ece5..c4dd9268593 100644
--- a/docs/cloud/manage/jan2025_faq/dimensions.md
+++ b/docs/cloud/manage/jan2025_faq/dimensions.md
@@ -4,6 +4,7 @@ slug: /cloud/manage/jan-2025-faq/pricing-dimensions
keywords: ['new pricing', 'dimensions']
description: 'Pricing dimensions for data transfer and ClickPipes'
---
+
import Image from '@theme/IdealImage';
import clickpipesPricingFaq1 from '@site/static/images/cloud/manage/jan2025_faq/external_clickpipes_pricing_faq_1.png';
import clickpipesPricingFaq2 from '@site/static/images/cloud/manage/jan2025_faq/external_clickpipes_pricing_faq_2.png';
From 4add218f4782b5ecfeb208a97db801091205b8ee Mon Sep 17 00:00:00 2001
From: Shaun Struwig <41984034+Blargian@users.noreply.github.com>
Date: Tue, 27 May 2025 09:48:07 +0200
Subject: [PATCH 4/6] add example
---
docs/cloud/manage/billing.md | 28 +++++++++++++++++++++++++++-
1 file changed, 27 insertions(+), 1 deletion(-)
diff --git a/docs/cloud/manage/billing.md b/docs/cloud/manage/billing.md
index fe43f9961d6..a192d707db0 100644
--- a/docs/cloud/manage/billing.md
+++ b/docs/cloud/manage/billing.md
@@ -424,7 +424,33 @@ organization’s tier:
| **Basic Tier** | 0.5 compute unit per service — $0.10 per hour |
| **Scale or Enterprise Tier** | 1 compute unit per service — $0.20 per hour |
+#### Example {#example}
+Let’s say your service is in the Scale tier and has the following setup:
+
+- 2 Postgres ClickPipes running continuous replication
+- Each pipe ingests 500 GB of data changes (CDC) per month
+- When the first pipe is kicked off, the service provisions **1 compute unit under the Scale Tier** for Postgres CDC
+
+##### Monthly cost breakdown {#cost-breakdown}
+
+**Ingested Data (CDC)**:
+
+$$ 2 pipes \times 500 GB \eq 1,000 GB per month
+ 1,000 GB \times \$0.20/GB \eq \$200$$
+
+ **Compute**:
+
+ $$1 compute unit \times \$0.20/hr \times 730 hours (approximate month) = \$146$$
+
+:::note
+Compute is shared across both pipes
+:::
+
+**Total Monthly Cost**:
+
+ $$\$200 (ingest) + \$146 (compute) = \$346$$
+
### ClickPipes for streaming and object storage {#clickpipes-for-streaming-object-storage}
This section outlines the pricing model of ClickPipes for streaming and object storage.
@@ -581,4 +607,4 @@ lesser processing, while others require more processing with less data.
### FAQ for streaming and object storage ClickPipes {#faq-streaming-and-object-storage}
-
\ No newline at end of file
+
From 3c5f5b000bf7d86782c8fda3ead5274904f89008 Mon Sep 17 00:00:00 2001
From: Shaun Struwig <41984034+Blargian@users.noreply.github.com>
Date: Tue, 27 May 2025 10:04:02 +0200
Subject: [PATCH 5/6] Update billing.md
---
docs/cloud/manage/billing.md | 13 +++++--------
1 file changed, 5 insertions(+), 8 deletions(-)
diff --git a/docs/cloud/manage/billing.md b/docs/cloud/manage/billing.md
index a192d707db0..b218b8481ee 100644
--- a/docs/cloud/manage/billing.md
+++ b/docs/cloud/manage/billing.md
@@ -435,21 +435,18 @@ Let’s say your service is in Scale tier and has the following setup:
##### Monthly cost breakdown {#cost-breakdown}
**Ingested Data (CDC)**:
+$$ 2 \text{ pipes} \times 500 \text{ GB} = 1,000 \text{ GB per month} $$
+$$ 1,000 \text{ GB} \times \$0.20/\text{GB} = \$200 $$
-$$ 2 pipes \times 500 GB \eq 1,000 GB per month
- 1,000 GB \times \$0.20/GB \eq \$200$$
-
- **Compute**:
-
- $$1 compute unit \times \$0.20/hr \times 730 hours (approximate month) = \$146$$
+**Compute**:
+$$1 \text{ compute unit} \times \$0.20/\text{hr} \times 730 \text{ hours (approximate month)} = \$146$$
:::note
Compute is shared across both pipes
:::
**Total Monthly Cost**:
-
- $$\$200 (ingest) + \$146 (compute) = \$346$$
+$$\$200 \text{ (ingest)} + \$146 \text{ (compute)} = \$346$$
### ClickPipes for streaming and object storage {#clickpipes-for-streaming-object-storage}
From d792d531b25135ac9251dd9ce77a04d8e5472b8a Mon Sep 17 00:00:00 2001
From: Shaun Struwig <41984034+Blargian@users.noreply.github.com>
Date: Tue, 27 May 2025 10:22:30 +0200
Subject: [PATCH 6/6] Update billing.md
---
docs/cloud/manage/billing.md | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/docs/cloud/manage/billing.md b/docs/cloud/manage/billing.md
index b218b8481ee..8d608096086 100644
--- a/docs/cloud/manage/billing.md
+++ b/docs/cloud/manage/billing.md
@@ -434,18 +434,22 @@ Let’s say your service is in Scale tier and has the following setup:
##### Monthly cost breakdown {#cost-breakdown}
-**Ingested Data (CDC)**:
-$$ 2 \text{ pipes} \times 500 \text{ GB} = 1,000 \text{ GB per month} $$
+**Ingested Data (CDC)**:
+
+$$ 2 \text{ pipes} \times 500 \text{ GB} = 1,000 \text{ GB per month} $$
+
$$ 1,000 \text{ GB} \times \$0.20/\text{GB} = \$200 $$
-**Compute**:
+**Compute**:
+
$$1 \text{ compute unit} \times \$0.20/\text{hr} \times 730 \text{ hours (approximate month)} = \$146$$
:::note
Compute is shared across both pipes
:::
-**Total Monthly Cost**:
+**Total Monthly Cost**:
+
$$\$200 \text{ (ingest)} + \$146 \text{ (compute)} = \$346$$
### ClickPipes for streaming and object storage {#clickpipes-for-streaming-object-storage}