diff --git a/docs/cloud/manage/billing.md b/docs/cloud/manage/billing.md
index f51fe1c1f6b..8d608096086 100644
--- a/docs/cloud/manage/billing.md
+++ b/docs/cloud/manage/billing.md
@@ -5,11 +5,13 @@ title: 'Pricing'
description: 'Overview page for ClickHouse Cloud pricing'
---
+import ClickPipesFAQ from './jan2025_faq/_snippets/_clickpipes_faq.md'
+
For pricing information, see the [ClickHouse Cloud Pricing](https://clickhouse.com/pricing#pricing-calculator) page.
ClickHouse Cloud bills based on the usage of compute, storage, [data transfer](/cloud/manage/network-data-transfer) (egress over the internet and cross-region), and [ClickPipes](/integrations/clickpipes).
To understand what can affect your bill, and ways that you can manage your spend, keep reading.
-## Amazon Web Services (AWS) Example {#amazon-web-services-aws-example}
+## Amazon Web Services (AWS) example {#amazon-web-services-aws-example}
:::note
- Prices reflect AWS us-east-1 pricing.
@@ -70,7 +72,7 @@ Pricing breakdown for this example:
-### Scale (Always-on, Auto-scaling): From $499.38 per month {#scale-always-on-auto-scaling-from-49938-per-month}
+### Scale (always-on, auto-scaling): from $499.38 per month {#scale-always-on-auto-scaling-from-49938-per-month}
Best for: workloads requiring enhanced SLAs (2+ replica services), scalability, and advanced security.
@@ -98,9 +100,9 @@ Pricing breakdown for this example:
| Storage |
- 1TB of data + 1 backup \$50.60 |
- 2TB of data + 1 backup \$101.20 |
- 3TB of data + 1 backup \$151.80 |
+ 1 TB of data + 1 backup \$50.60 |
+ 2 TB of data + 1 backup \$101.20 |
+ 3 TB of data + 1 backup \$151.80 |
| Public internet egress data transfer |
@@ -148,9 +150,9 @@ Best for: large scale, mission critical deployments that have stringent security
| Storage |
- 5TB + 1 backup \$253.00 |
- 10TB + 1 backup \$506.00 |
- 20TB + 1 backup \$1,012.00 |
+ 5 TB + 1 backup \$253.00 |
+ 10 TB + 1 backup \$506.00 |
+ 20 TB + 1 backup \$1,012.00 |
| Public internet egress data transfer |
@@ -173,7 +175,7 @@ Best for: large scale, mission critical deployments that have stringent security
-## FAQs {#faqs}
+## Frequently asked questions {#faqs}
### How is compute metered? {#how-is-compute-metered}
@@ -193,8 +195,8 @@ Users who need additional backups can do so by configuring additional [backups](
### How do I estimate compression? {#how-do-i-estimate-compression}
-Compression can vary quite a bit by dataset.
-It is dependent on how compressible the data is in the first place (number of high vs. low cardinality fields),
+Compression can vary significantly from dataset to dataset.
+It depends on how compressible the data is in the first place (the number of high vs. low cardinality fields),
and how the user sets up the schema (using optional codecs or not, for instance).
It can be on the order of 10x for common types of analytical data, but it can be significantly lower or higher as well.
See the [optimizing documentation](/optimize/asynchronous-inserts) for guidance and this [Uber blog](https://www.uber.com/blog/logging/) for a detailed logging use case example.
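+
+As a rough sketch (not an official tool), you can turn this guidance into a quick
+estimate of billed storage from raw data size. The function below and its 10x
+default are illustrative assumptions; measure your own dataset before relying on them:
+
+```python
+# Back-of-the-envelope estimate of billed storage from raw (uncompressed) size.
+# A ~10x ratio is only a typical figure for analytical data; actual ratios vary.
+def estimate_stored_gb(raw_gb: float, compression_ratio: float = 10.0) -> float:
+    return raw_gb / compression_ratio
+
+print(estimate_stored_gb(1000))  # 1 TB raw -> ~100 GB stored at a 10x ratio
+```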
@@ -227,7 +229,7 @@ Billing follows a monthly billing cycle and the start date is tracked as the dat
### What controls does ClickHouse Cloud offer to manage costs for Scale and Enterprise services? {#what-controls-does-clickhouse-cloud-offer-to-manage-costs-for-scale-and-enterprise-services}
-- Trial and Annual Commit customers will be notified automatically by email when their consumption hits certain thresholds: `50%`, `75%`, and `90%`. This allows users to proactively manage their usage.
+- Trial and Annual Commit customers are notified automatically by email when their consumption hits certain thresholds: `50%`, `75%`, and `90%`. This allows users to proactively manage their usage.
- ClickHouse Cloud allows users to set a maximum auto-scaling limit on their compute via [Advanced scaling control](/manage/scaling), a significant cost factor for analytical workloads.
- The [Advanced scaling control](/manage/scaling) lets you set memory limits with an option to control the behavior of pausing/idling during inactivity.
@@ -251,13 +253,13 @@ The ClickHouse Cloud console provides a Usage display that details usage per ser
### How do I access my invoice for my marketplace subscription to the ClickHouse Cloud service? {#how-do-i-access-my-invoice-for-my-marketplace-subscription-to-the-clickhouse-cloud-service}
-All marketplace subscriptions will be billed and invoiced by the marketplace. You can view your invoice through the respective cloud provider marketplace directly.
+All marketplace subscriptions are billed and invoiced by the marketplace. You can view your invoice through the respective cloud provider marketplace directly.
### Why do the dates on the Usage statements not match my Marketplace Invoice? {#why-do-the-dates-on-the-usage-statements-not-match-my-marketplace-invoice}
AWS Marketplace billing follows the calendar month cycle.
For example, for usage between dates 01-Dec-2024 and 01-Jan-2025,
-an invoice will be generated between 3-Jan and 5-Jan-2025
+an invoice is generated between 3-Jan and 5-Jan-2025.
ClickHouse Cloud usage statements follow a different billing cycle where usage is metered
and reported over 30 days starting from the day of sign up.
@@ -352,37 +354,137 @@ Cost estimation (per month) for this example on the **Scale Tier**:
Without warehouses, you would have to pay for the amount of memory that the data engineer needs for their queries.
However, combining two services in a warehouse and idling one of them helps you save money.
-## ClickPipes Pricing {#clickpipes-pricing}
+## ClickPipes pricing {#clickpipes-pricing}
+
+### ClickPipes for Postgres CDC {#clickpipes-for-postgres-cdc}
+
+This section outlines the pricing model for our Postgres Change Data Capture (CDC)
+connector in ClickPipes. In designing this model, our goal was to keep pricing
+highly competitive while staying true to our core vision:
+
+> Making it seamless and
+> affordable for customers to move data from Postgres to ClickHouse for
+> real-time analytics.
+
+The connector is over **5x more cost-effective** than external
+ETL tools and similar features in other database platforms. $^*$
+
+:::note
+Usage will start being metered on monthly bills beginning **September 1st, 2025**,
+for all customers (both existing and new) using Postgres CDC ClickPipes. Until
+then, usage is free. Customers have a 3-month window starting May 29 (the GA announcement date)
+to review and optimize their costs if needed, although we expect most will not need
+to make any changes.
+:::
+
+$^*$ _For example, the external ETL tool Airbyte, which offers similar CDC capabilities,
+charges \$10/GB (excluding credits)—more than 20 times the cost of Postgres CDC in
+ClickPipes for moving 1 TB of data._
+
+#### Pricing dimensions {#pricing-dimensions}
+
+There are two main dimensions to pricing:
+
+1. **Ingested Data**: The raw, uncompressed bytes coming from Postgres and
+ ingested into ClickHouse.
+2. **Compute**: The compute units provisioned per service manage multiple
+ Postgres CDC ClickPipes and are separate from the compute units used by the
+ ClickHouse Cloud service. This additional compute is dedicated specifically
+ to Postgres CDC ClickPipes. Compute is billed at the service level, not per
+ individual pipe. Each compute unit includes 2 vCPUs and 8 GB of RAM.
+
+#### Ingested data {#ingested-data}
+
+The Postgres CDC connector operates in two main phases:
+
+- **Initial load / resync**: This captures a full snapshot of Postgres tables
+ and occurs when a pipe is first created or re-synced.
+- **Continuous Replication (CDC)**: Ongoing replication of changes—such as inserts,
+ updates, deletes, and schema changes—from Postgres to ClickHouse.
+
+In most use cases, continuous replication accounts for over 90% of a ClickPipe's
+life cycle. Because initial loads involve transferring a large volume of data all
+at once, we offer a lower rate for that phase.
+
+| Phase | Cost |
+|----------------------------------|--------------|
+| **Initial load / resync**        | \$0.10 per GB |
+| **Continuous Replication (CDC)** | \$0.20 per GB |
+
+#### Compute {#compute}
-### What does the ClickPipes pricing structure look like? {#what-does-the-clickpipes-pricing-structure-look-like}
+This dimension covers the compute units provisioned per service just for Postgres
+ClickPipes. Compute is shared across all Postgres pipes within a service. **It
+is provisioned when the first Postgres pipe is created and deallocated when no
+Postgres CDC pipes remain**. The amount of compute provisioned depends on your
+organization’s tier:
+
+| Tier | Cost |
+|------------------------------|-----------------------------------------------|
+| **Basic Tier**               | 0.5 compute unit per service — \$0.10 per hour |
+| **Scale or Enterprise Tier** | 1 compute unit per service — \$0.20 per hour   |
+
+#### Example {#example}
+
+Let’s say your service is in the Scale tier and has the following setup:
+
+- 2 Postgres ClickPipes running continuous replication
+- Each pipe ingests 500 GB of data changes (CDC) per month
+- When the first pipe is kicked off, the service provisions **1 compute unit under the Scale Tier** for Postgres CDC.
+
+##### Monthly cost breakdown {#cost-breakdown}
+
+**Ingested Data (CDC)**:
+
+$$ 2 \text{ pipes} \times 500 \text{ GB} = 1,000 \text{ GB per month} $$
+
+$$ 1,000 \text{ GB} \times \$0.20/\text{GB} = \$200 $$
+
+**Compute**:
+
+$$1 \text{ compute unit} \times \$0.20/\text{hr} \times 730 \text{ hours (approximate month)} = \$146$$
+
+:::note
+Compute is shared across both pipes.
+:::
+
+**Total Monthly Cost**:
+
+$$\$200 \text{ (ingest)} + \$146 \text{ (compute)} = \$346$$
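+
+As a sanity check, here is a minimal sketch in Python that reproduces these numbers
+from the published rates. The function and constant names are illustrative
+assumptions, not an official calculator:
+
+```python
+HOURS_PER_MONTH = 730  # approximate month, as in the example above
+
+# Published ingest rates ($ per GB)
+INITIAL_LOAD_RATE = 0.10
+CDC_RATE = 0.20
+
+# Published per-service compute cost ($ per hour) by organization tier
+COMPUTE_RATE = {"basic": 0.10, "scale": 0.20, "enterprise": 0.20}
+
+def estimate_monthly_cost(tier: str, cdc_gb: float, initial_load_gb: float = 0.0) -> float:
+    """Estimate one month of Postgres CDC ClickPipes cost for a single service.
+
+    Compute is billed per service and shared by all Postgres pipes, so the
+    number of pipes only matters through the total GB they ingest.
+    """
+    ingest = cdc_gb * CDC_RATE + initial_load_gb * INITIAL_LOAD_RATE
+    compute = COMPUTE_RATE[tier] * HOURS_PER_MONTH
+    return ingest + compute
+
+# Two pipes, each ingesting 500 GB of CDC changes per month, on the Scale tier:
+print(estimate_monthly_cost("scale", cdc_gb=2 * 500))  # 346.0
+```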
+
+### ClickPipes for streaming and object storage {#clickpipes-for-streaming-object-storage}
+
+This section outlines the pricing model of ClickPipes for streaming and object storage.
+
+#### What does the ClickPipes pricing structure look like? {#what-does-the-clickpipes-pricing-structure-look-like}
It consists of two dimensions:
- **Compute**: Price per unit per hour
- Compute represents the cost of running the ClickPipes replica pods whether they actively ingest data or not.
- It applies to all ClickPipes types.
+ Compute represents the cost of running the ClickPipes replica pods whether they actively ingest data or not.
+ It applies to all ClickPipes types.
- **Ingested data**: per GB pricing
- The ingested data rate applies to all streaming ClickPipes
- (Kafka, Confluent, Amazon MSK, Amazon Kinesis, Redpanda, WarpStream, Azure Event Hubs)
- for the data transferred via the replica pods. The ingested data size (GB) is charged based on bytes received from the source (uncompressed or compressed).
+ The ingested data rate applies to all streaming ClickPipes
+ (Kafka, Confluent, Amazon MSK, Amazon Kinesis, Redpanda, WarpStream, Azure Event Hubs)
+ for the data transferred via the replica pods. The ingested data size (GB) is charged based on bytes received from the source (uncompressed or compressed).
-### What are ClickPipes replicas? {#what-are-clickpipes-replicas}
+#### What are ClickPipes replicas? {#what-are-clickpipes-replicas}
-ClickPipes ingests data from remote data sources via a dedicated infrastructure
-that runs and scales independently of the ClickHouse Cloud service.
+ClickPipes ingests data from remote data sources via a dedicated infrastructure
+that runs and scales independently of the ClickHouse Cloud service.
For this reason, it uses dedicated compute replicas.
-### What is the default number of replicas and their size? {#what-is-the-default-number-of-replicas-and-their-size}
+#### What is the default number of replicas and their size? {#what-is-the-default-number-of-replicas-and-their-size}
-Each ClickPipe defaults to 1 replica that is provided with 2 GiB of RAM and 0.5 vCPU.
+Each ClickPipe defaults to 1 replica that is provided with 2 GiB of RAM and 0.5 vCPU.
This corresponds to **0.25** ClickHouse compute units (1 unit = 8 GiB RAM, 2 vCPUs).
-### What are the ClickPipes public prices? {#what-are-the-clickpipes-public-prices}
+#### What are the ClickPipes public prices? {#what-are-the-clickpipes-public-prices}
- Compute: \$0.20 per unit per hour (\$0.05 per replica per hour)
- Ingested data: \$0.04 per GB
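+
+As a hedged sketch of how these prices combine (names are illustrative, not an
+official calculator; the worked examples follow in the next section):
+
+```python
+COMPUTE_RATE = 0.20       # $ per compute unit per hour
+INGEST_RATE = 0.04        # $ per GB ingested (streaming ClickPipes only)
+UNITS_PER_REPLICA = 0.25  # each replica is 0.25 ClickHouse compute units
+
+def streaming_cost(replicas: int, hours: float, ingested_gb: float) -> float:
+    compute = replicas * UNITS_PER_REPLICA * COMPUTE_RATE * hours
+    return compute + ingested_gb * INGEST_RATE
+
+def object_storage_cost(replicas: int, hours: float) -> float:
+    # Object storage ClickPipes (S3, GCS) incur only orchestration compute;
+    # the data transfer itself is performed by the underlying ClickHouse service.
+    return replicas * UNITS_PER_REPLICA * COMPUTE_RATE * hours
+
+print(streaming_cost(1, 24, 1000))  # 41.2 (1 TB over 24 hours, single replica)
+print(object_storage_cost(1, 24))   # 1.2
+```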
-### How does it look in an illustrative example? {#how-does-it-look-in-an-illustrative-example}
+#### How does it look in an illustrative example? {#how-does-it-look-in-an-illustrative-example}
The following examples assume a single replica unless explicitly mentioned.
@@ -409,6 +511,101 @@ The following examples assume a single replica unless explicitly mentioned.
-$^1$ _Only ClickPipes compute for orchestration,
+$^1$ _Only ClickPipes compute for orchestration,
the actual data transfer is performed by the underlying ClickHouse service_
+## ClickPipes pricing FAQ {#clickpipes-pricing-faq}
+
+Below, you will find frequently asked questions about Postgres CDC ClickPipes,
+and about streaming and object storage ClickPipes.
+
+### FAQ for Postgres CDC ClickPipes {#faq-postgres-cdc-clickpipe}
+
+<details>
+
+<summary>Is the ingested data measured in pricing based on compressed or uncompressed size?</summary>
+
+The ingested data is measured as _uncompressed data_ coming from Postgres—both
+during the initial load and CDC (via the replication slot). Postgres does not
+compress data during transit by default, and the ClickPipe processes the raw,
+uncompressed bytes.
+
+</details>
+
+<details>
+
+<summary>When will Postgres CDC pricing start appearing on my bills?</summary>
+
+Postgres CDC ClickPipes pricing begins appearing on monthly bills starting
+**September 1st, 2025**, for all customers—both existing and new. Until then,
+usage is free. Customers have a **3-month window** starting from **May 29**
+(the GA announcement date) to review and optimize their usage if needed, although
+we expect most won’t need to make any changes.
+
+</details>
+
+<details>
+
+<summary>Will I be charged if I pause my pipes?</summary>
+
+No data ingestion charges apply while a pipe is paused, since no data is moved.
+However, compute charges still apply—either 0.5 or 1 compute unit—based on your
+organization’s tier. This is a fixed service-level cost and applies across all
+pipes within that service.
+
+</details>
+
+<details>
+
+<summary>How can I estimate my pricing?</summary>
+
+The Overview page in ClickPipes provides metrics for both initial load/resync and
+CDC data volumes. You can estimate your Postgres CDC costs using these metrics
+in conjunction with the ClickPipes pricing.
+
+</details>
+
+<details>
+
+<summary>Can I scale the compute allocated for Postgres CDC in my service?</summary>
+
+By default, compute scaling is not user-configurable. The provisioned resources
+are sized to handle most customer workloads efficiently. If your use case
+requires more or less compute, please open a support ticket so we can evaluate
+your request.
+
+</details>
+
+<details>
+
+<summary>What is the pricing granularity?</summary>
+
+- **Compute**: Billed per hour. Partial hours are rounded up to the next hour.
+- **Ingested Data**: Measured and billed per gigabyte (GB) of uncompressed data.
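+
+For instance, a minimal sketch of the rounding rule (illustrative only):
+
+```python
+import math
+
+def billable_compute_hours(actual_hours: float) -> int:
+    # Partial hours are rounded up to the next full hour
+    return math.ceil(actual_hours)
+
+print(billable_compute_hours(10.2))  # 11
+```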
+
+</details>
+
+<details>
+
+<summary>Can I use my ClickHouse Cloud credits for Postgres CDC via ClickPipes?</summary>
+
+Yes. ClickPipes pricing is part of the unified ClickHouse Cloud pricing. Any
+platform credits you have will automatically apply to ClickPipes usage as well.
+
+</details>
+
+<details>
+
+<summary>How much additional cost should I expect from Postgres CDC ClickPipes in my existing monthly ClickHouse Cloud spend?</summary>
+
+The cost varies based on your use case, data volume, and organization tier.
+That said, most existing customers see an increase of **0–15%** relative to their
+existing monthly ClickHouse Cloud spend after the trial. Actual costs may vary
+depending on your workload—some workloads involve high data volumes with
+less processing, while others require more processing with less data.
+
+</details>
+
+### FAQ for streaming and object storage ClickPipes {#faq-streaming-and-object-storage}
+
+<ClickPipesFAQ />
diff --git a/docs/cloud/manage/jan2025_faq/_snippets/_clickpipes_faq.md b/docs/cloud/manage/jan2025_faq/_snippets/_clickpipes_faq.md
new file mode 100644
index 00000000000..215982c8b11
--- /dev/null
+++ b/docs/cloud/manage/jan2025_faq/_snippets/_clickpipes_faq.md
@@ -0,0 +1,145 @@
+import Image from '@theme/IdealImage';
+import clickpipesPricingFaq1 from '@site/static/images/cloud/manage/jan2025_faq/external_clickpipes_pricing_faq_1.png';
+import clickpipesPricingFaq2 from '@site/static/images/cloud/manage/jan2025_faq/external_clickpipes_pricing_faq_2.png';
+import clickpipesPricingFaq3 from '@site/static/images/cloud/manage/jan2025_faq/external_clickpipes_pricing_faq_3.png';
+
+<details>
+
+<summary>Why are we introducing a pricing model for ClickPipes now?</summary>
+
+We initially launched ClickPipes for free in order to gather
+feedback, refine features, and ensure it meets user needs.
+As the GA platform has grown, it has effectively stood the test of time by
+moving trillions of rows. Introducing a pricing model allows us to continue
+improving the service, maintaining the infrastructure, and providing dedicated
+support and new connectors.
+
+</details>
+
+<details>
+
+<summary>What are ClickPipes replicas?</summary>
+
+ClickPipes ingests data from remote data sources via a dedicated infrastructure
+that runs and scales independently of the ClickHouse Cloud service.
+For this reason, it uses dedicated compute replicas.
+The diagrams below show a simplified architecture.
+
+For streaming ClickPipes, ClickPipes replicas access the remote data sources (e.g., a Kafka broker),
+pull the data, process and ingest it into the destination ClickHouse service.
+
+<Image img={clickpipesPricingFaq1} size="lg" alt="ClickPipes streaming architecture" border/>
+
+In the case of object storage ClickPipes,
+the ClickPipes replica orchestrates the data loading task
+(identifying files to copy, maintaining the state, and moving partitions),
+while the data is pulled directly by the ClickHouse service.
+
+<Image img={clickpipesPricingFaq2} size="lg" alt="ClickPipes object storage architecture" border/>
+
+</details>
+
+<details>
+
+<summary>What's the default number of replicas and their size?</summary>
+
+Each ClickPipe defaults to 1 replica that's provided with 2 GiB of RAM and 0.5 vCPU.
+This corresponds to **0.25** ClickHouse compute units (1 unit = 8 GiB RAM, 2 vCPUs).
+
+</details>
+
+<details>
+
+<summary>Can ClickPipes replicas be scaled?</summary>
+
+ClickPipes for streaming can be scaled horizontally
+by adding more replicas, each with a base unit of **0.25** ClickHouse compute units.
+Vertical scaling is also available on demand for specific use cases (adding more CPU and RAM per replica).
+
+</details>
+
+<details>
+
+<summary>How many ClickPipes replicas do I need?</summary>
+
+It depends on the workload throughput and latency requirements.
+We recommend starting with the default value of 1 replica, measuring your latency, and adding replicas if needed.
+Keep in mind that for Kafka ClickPipes, you also have to scale the Kafka broker partitions accordingly.
+The scaling controls are available under "settings" for each streaming ClickPipe.
+
+<Image img={clickpipesPricingFaq3} size="lg" alt="ClickPipes scaling controls" border/>
+
+</details>
+
+<details>
+
+<summary>What does the ClickPipes pricing structure look like?</summary>
+
+It consists of two dimensions:
+- **Compute**: Price per unit per hour
+ Compute represents the cost of running the ClickPipes replica pods whether they actively ingest data or not.
+ It applies to all ClickPipes types.
+- **Ingested data**: per GB pricing
+ The ingested data rate applies to all streaming ClickPipes
+ (Kafka, Confluent, Amazon MSK, Amazon Kinesis, Redpanda, WarpStream,
+ Azure Event Hubs) for the data transferred via the replica pods.
+ The ingested data size (GB) is charged based on bytes received from the source (uncompressed or compressed).
+
+</details>
+
+<details>
+
+<summary>What are the ClickPipes public prices?</summary>
+
+- Compute: \$0.20 per unit per hour (\$0.05 per replica per hour)
+- Ingested data: \$0.04 per GB
+
+</details>
+
+<details>
+
+<summary>How does it look in an illustrative example?</summary>
+
+For example, ingesting 1 TB of data over 24 hours through the Kafka connector with a single replica (0.25 compute units) costs:
+
+$$
+(0.25 \times 0.20 \times 24) + (0.04 \times 1000) = \$41.2
+$$
+
+
+For object storage connectors (S3 and GCS),
+only the ClickPipes compute cost is incurred, since the ClickPipes pod does not process data
+but only orchestrates the transfer, which is performed by the underlying ClickHouse service:
+
+$$
+0.25 \times 0.20 \times 24 = \$1.2
+$$
+
+</details>
+
+<details>
+
+<summary>When does the new pricing model take effect?</summary>
+
+The new pricing model takes effect for all organizations created after January 27th, 2025.
+
+</details>
+
+<details>
+
+<summary>What happens to current users?</summary>
+
+Existing users will have a **60-day grace period** during which the ClickPipes service continues to be offered for free.
+Billing will automatically start for ClickPipes for existing users on **March 24th, 2025.**
+
+</details>
+
+<details>
+
+<summary>How does ClickPipes pricing compare to the market?</summary>
+
+The philosophy behind ClickPipes pricing is
+to cover the operating costs of the platform while offering an easy and reliable way to move data to ClickHouse Cloud.
+From that angle, our market analysis revealed that we are positioned competitively.
+
+</details>
\ No newline at end of file
diff --git a/docs/cloud/manage/jan2025_faq/dimensions.md b/docs/cloud/manage/jan2025_faq/dimensions.md
index d375d337663..c4dd9268593 100644
--- a/docs/cloud/manage/jan2025_faq/dimensions.md
+++ b/docs/cloud/manage/jan2025_faq/dimensions.md
@@ -10,123 +10,29 @@ import clickpipesPricingFaq1 from '@site/static/images/cloud/manage/jan2025_faq/
import clickpipesPricingFaq2 from '@site/static/images/cloud/manage/jan2025_faq/external_clickpipes_pricing_faq_2.png';
import clickpipesPricingFaq3 from '@site/static/images/cloud/manage/jan2025_faq/external_clickpipes_pricing_faq_3.png';
import NetworkPricing from '@site/docs/cloud/manage/_snippets/_network_transfer_rates.md';
-
+import ClickPipesFAQ from './_snippets/_clickpipes_faq.md'
+
The following dimensions have been added to the new ClickHouse Cloud pricing.
:::note
-Data transfer and ClickPipes pricing will not apply to legacy plans, i.e. Development, Production, and Dedicated, until 24 March 2025.
+Data transfer and ClickPipes pricing doesn't apply to legacy plans, i.e. Development, Production, and Dedicated, until 24 March 2025.
:::
-## Data Transfer Pricing {#data-transfer-pricing}
+## Data transfer pricing {#data-transfer-pricing}
### How are users charged for data transfer, and will this vary across organization tiers and regions? {#how-are-users-charged-for-data-transfer-and-will-this-vary-across-organization-tiers-and-regions}
-- Users will pay for data transfer along two dimensions — public internet egress and inter-region egress. There are no charges for intra-region data transfer or Private Link/Private Service Connect use and data transfer. However, we reserve the right to implement additional data transfer pricing dimensions if we see usage patterns that impact our ability to charge users appropriately.
-- Data transfer pricing will vary by Cloud Service Provider (CSP) and region.
-- Data transfer pricing will **not** vary between organizational tiers.
+- Users pay for data transfer along two dimensions — public internet egress and inter-region egress. There are no charges for intra-region data transfer or Private Link/Private Service Connect use and data transfer. However, we reserve the right to implement additional data transfer pricing dimensions if we see usage patterns that impact our ability to charge users appropriately.
+- Data transfer pricing varies by Cloud Service Provider (CSP) and region.
+- Data transfer pricing does **not** vary between organizational tiers.
- Public egress pricing is based only on the origin region. Inter-region (or cross-region) pricing depends on both the origin and destination regions.
### Will data transfer pricing be tiered as usage increases? {#will-data-transfer-pricing-be-tiered-as-usage-increases}
-Data transfer prices will **not** be tiered as usage increases. Note that the pricing varies by region and cloud service provider.
-
-## ClickPipes Pricing FAQ {#clickpipes-pricing-faq}
-
-### Why are we introducing a pricing model for ClickPipes now? {#why-are-we-introducing-a-pricing-model-for-clickpipes-now}
-
-We decided to initially launch ClickPipes for free with the idea to gather feedback, refine features,
-and ensure it meets user needs.
-As the GA platform has grown and effectively stood the test of time by moving trillions of rows,
-introducing a pricing model allows us to continue improving the service,
-maintaining the infrastructure, and providing dedicated support and new connectors.
-
-### What are ClickPipes replicas? {#what-are-clickpipes-replicas}
-
-ClickPipes ingests data from remote data sources via a dedicated infrastructure
-that runs and scales independently of the ClickHouse Cloud service.
-For this reason, it uses dedicated compute replicas.
-The diagrams below show a simplified architecture.
-
-For streaming ClickPipes, ClickPipes replicas access the remote data sources (e.g., a Kafka broker),
-pull the data, process and ingest it into the destination ClickHouse service.
-
-<Image img={clickpipesPricingFaq1} size="lg" alt="ClickPipes streaming architecture" border/>
-
-In the case of object storage ClickPipes,
-the ClickPipes replica orchestrates the data loading task
-(identifying files to copy, maintaining the state, and moving partitions),
-while the data is pulled directly from the ClickHouse service.
-
-<Image img={clickpipesPricingFaq2} size="lg" alt="ClickPipes object storage architecture" border/>
-
-### What is the default number of replicas and their size? {#what-is-the-default-number-of-replicas-and-their-size}
-
-Each ClickPipe defaults to 1 replica that is provided with 2 GiB of RAM and 0.5 vCPU.
-This corresponds to **0.25** ClickHouse compute units (1 unit = 8 GiB RAM, 2 vCPUs).
-
-### Can ClickPipes replicas be scaled? {#can-clickpipes-replicas-be-scaled}
-
-Currently, only ClickPipes for streaming can be scaled horizontally
-by adding more replicas each with a base unit of **0.25** ClickHouse compute units.
-Vertical scaling is also available on demand for specific use cases (adding more CPU and RAM per replica).
-
-### How many ClickPipes replicas do I need? {#how-many-clickpipes-replicas-do-i-need}
-
-It depends on the workload throughput and latency requirements.
-We recommend starting with the default value of 1 replica, measuring your latency, and adding replicas if needed.
-Keep in mind that for Kafka ClickPipes, you also have to scale the Kafka broker partitions accordingly.
-The scaling controls are available under "settings" for each streaming ClickPipe.
-
-<Image img={clickpipesPricingFaq3} size="lg" alt="ClickPipes scaling controls" border/>
-
-### What does the ClickPipes pricing structure look like? {#what-does-the-clickpipes-pricing-structure-look-like}
-
-It consists of two dimensions:
-- **Compute**: Price per unit per hour
- Compute represents the cost of running the ClickPipes replica pods whether they actively ingest data or not.
- It applies to all ClickPipes types.
-- **Ingested data**: per GB pricing
- The ingested data rate applies to all streaming ClickPipes
- (Kafka, Confluent, Amazon MSK, Amazon Kinesis, Redpanda, WarpStream,
- Azure Event Hubs) for the data transferred via the replica pods.
- The ingested data size (GB) is charged based on bytes received from the source (uncompressed or compressed).
-
-### What are the ClickPipes public prices? {#what-are-the-clickpipes-public-prices}
-
-- Compute: \$0.20 per unit per hour ($0.05 per replica per hour)
-- Ingested data: $0.04 per GB
-
-### How does it look in an illustrative example? {#how-does-it-look-in-an-illustrative-example}
-
-For example, ingesting 1 TB of data over 24 hours using the Kafka connector using a single replica (0.25 compute unit) will cost:
-
-$$
-(0.25 \times 0.20 \times 24) + (0.04 \times 1000) = \$41.2
-$$
-
-
-For object storage connectors (S3 and GCS),
-only the ClickPipes compute cost is incurred since the ClickPipes pod is not processing data
-but only orchestrating the transfer which is operated by the underlying ClickHouse service:
-
-$$
-0.25 \times 0,20 \times 24 = \$1.2
-$$
-
-### When does the new pricing model take effect? {#when-does-the-new-pricing-model-take-effect}
-
-The new pricing model will take effect for all organizations created after January 27th, 2025.
-
-### What happens to current users? {#what-happens-to-current-users}
-
-Existing users will have a **60-day grace period** where the ClickPipes service continues to be offered for free.
-Billing will automatically start for ClickPipes for existing users on **March 24th, 2025.**
+Data transfer prices will **not** be tiered as usage increases. Pricing varies by region and cloud service provider.
-### How does ClickPipes pricing compare to the market? {#how-does-clickpipes-pricing-compare-to-the-market}
+## ClickPipes pricing FAQ {#clickpipes-pricing-faq}
-The philosophy behind ClickPipes pricing is
-to cover the operating costs of the platform while offering an easy and reliable way to move data to ClickHouse Cloud.
-From that angle, our market analysis revealed that we are positioned competitively.
+<ClickPipesFAQ />
diff --git a/scripts/check-doc-aspell b/scripts/check-doc-aspell
index da2492638db..a922daaf615 100755
--- a/scripts/check-doc-aspell
+++ b/scripts/check-doc-aspell
@@ -94,6 +94,7 @@ preprocess_file() {
    | sed -E 's/<[^>]*\/?>//g' \
| grep -Ev '(^[[:space:]]*(slug:||^import)