diff --git a/README.md b/README.md index fceeaaa0..b8b17c66 100644 --- a/README.md +++ b/README.md @@ -2,24 +2,26 @@

- - - Supabase Logo + Supabase Logo -

Supabase ETL

+

ETL

- A Rust crate to quickly build replication solutions for Postgres. Build data pipelines which continually copy data from Postgres to other systems. + Build real-time Postgres replication applications in Rust
- Examples + Documentation + · + Examples + · + Issues

-# ETL +**ETL** is a Rust framework by [Supabase](https://supabase.com) that enables you to build high-performance, real-time data replication applications for PostgreSQL. Whether you're creating ETL pipelines, implementing CDC (Change Data Capture), or building custom data synchronization solutions, ETL provides the building blocks you need. -This crate builds abstractions on top of Postgres's [logical streaming replication protocol](https://www.postgresql.org/docs/current/protocol-logical-replication.html) and pushes users towards the pit of success without letting them worry about low level details of the protocol. +Built on top of PostgreSQL's [logical streaming replication protocol](https://www.postgresql.org/docs/current/protocol-logical-replication.html), ETL handles the low-level complexities of database replication while providing a clean, Rust-native API that guides you towards the pit of success. ## Table of Contents @@ -35,27 +37,88 @@ This crate builds abstractions on top of Postgres's [logical streaming replicati ## Features -The `etl` crate supports the following destinations: +**Core Capabilities:** +- 🚀 **Real-time replication**: Stream changes from PostgreSQL as they happen +- 🔄 **Multiple destinations**: Support for various data warehouses and databases (coming soon) +- 🛡️ **Fault tolerance**: Built-in error handling, retries, and recovery mechanisms +- ⚡ **High performance**: Efficient batching and parallel processing +- 🔧 **Extensible**: Plugin architecture for custom destinations -- [x] BigQuery -- [ ] Apache Iceberg (planned) -- [ ] DuckDB (planned) +**Supported Destinations:** +- [x] **BigQuery** - Google Cloud's data warehouse +- [ ] **Apache Iceberg** (planned) - Open table format for analytics +- [ ] **DuckDB** (planned) - In-process analytical database ## Installation -To use `etl` in your Rust project, add the core library and desired destinations via git dependencies in `Cargo.toml`: +Add ETL to your Rust project via git dependencies 
in `Cargo.toml`: ```toml [dependencies] etl = { git = "https://github.com/supabase/etl" } -etl-destinations = { git = "https://github.com/supabase/etl", features = ["bigquery"] } ``` -The `etl` crate provides the core replication functionality, while `etl-destinations` contains destination-specific implementations. Each destination is behind a feature of the same name in the `etl-destinations` crate. The git dependency is needed for now because the crates are not yet published on crates.io. +> **Note**: ETL is currently distributed via Git while we prepare for the initial crates.io release. ## Quickstart -To quickly get started with `etl`, see the [etl-examples](etl-examples/README.md) crate which contains practical examples and detailed setup instructions. +Get up and running with ETL in minutes using the built-in memory destination: + +```rust +use etl::config::{BatchConfig, PgConnectionConfig, PipelineConfig, TlsConfig}; +use etl::pipeline::Pipeline; +use etl::destination::memory::MemoryDestination; +use etl::store::both::memory::MemoryStore; + +#[tokio::main] +async fn main() -> Result<(), Box<dyn std::error::Error>> { + // Configure PostgreSQL connection + let pg_connection_config = PgConnectionConfig { + host: "localhost".to_string(), + port: 5432, + name: "mydb".to_string(), + username: "postgres".to_string(), + password: Some("password".into()), + tls: TlsConfig { + trusted_root_certs: String::new(), + enabled: false, + }, + }; + + // Configure the pipeline + let pipeline_config = PipelineConfig { + id: 1, + publication_name: "my_publication".to_string(), + pg_connection: pg_connection_config, + batch: BatchConfig { + max_size: 1000, + max_fill_ms: 5000, + }, + table_error_retry_delay_ms: 10000, + max_table_sync_workers: 4, + }; + + // Create in-memory store and destination for testing + let store = MemoryStore::new(); + let destination = MemoryDestination::new(); + + // Create and start the pipeline + let mut pipeline = Pipeline::new(1, pipeline_config, store, destination); + 
pipeline.start().await?; + + Ok(()) +} +``` + +**Need production destinations?** Add the `etl-destinations` crate with specific features: + +```toml +[dependencies] +etl = { git = "https://github.com/supabase/etl" } +etl-destinations = { git = "https://github.com/supabase/etl", features = ["bigquery"] } +``` + +For comprehensive examples and tutorials, visit the [etl-examples](etl-examples/README.md) crate and our [documentation](https://supabase.github.io/etl). ## Database Setup @@ -110,3 +173,9 @@ This limits performance for large tables. We plan to address this once the ETL s ## License Distributed under the Apache-2.0 License. See `LICENSE` for more information. + +--- + +

+ Made with ❤️ by the Supabase team +

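The quickstart's `BatchConfig { max_size: 1000, max_fill_ms: 5000 }` suggests flush-on-size-or-timeout batching: a batch is emitted when it reaches `max_size` items or when `max_fill_ms` has elapsed since the first item arrived. The sketch below illustrates those semantics only — the type and its behavior here are an assumption for exposition, not the `etl` crate's actual implementation:

```rust
// Illustration of size-or-timeout batching as implied by BatchConfig
// (max_size, max_fill_ms) in the quickstart. Hypothetical helper type,
// not part of the etl crate's API.
use std::time::{Duration, Instant};

struct Batcher<T> {
    buf: Vec<T>,
    max_size: usize,
    max_fill: Duration,
    started: Option<Instant>,
}

impl<T> Batcher<T> {
    fn new(max_size: usize, max_fill_ms: u64) -> Self {
        Self {
            buf: Vec::new(),
            max_size,
            max_fill: Duration::from_millis(max_fill_ms),
            started: None,
        }
    }

    /// Push an item; returns the accumulated batch once either bound is hit.
    fn push(&mut self, item: T) -> Option<Vec<T>> {
        if self.buf.is_empty() {
            // Start the fill timer on the first item of a fresh batch.
            self.started = Some(Instant::now());
        }
        self.buf.push(item);
        let timed_out = self
            .started
            .map_or(false, |t| t.elapsed() >= self.max_fill);
        if self.buf.len() >= self.max_size || timed_out {
            self.started = None;
            Some(std::mem::take(&mut self.buf))
        } else {
            None
        }
    }
}

fn main() {
    // With max_size = 3, the third push flushes regardless of the timer.
    let mut b = Batcher::new(3, 5000);
    assert!(b.push(1).is_none());
    assert!(b.push(2).is_none());
    let batch = b.push(3).expect("size bound reached");
    assert_eq!(batch, vec![1, 2, 3]);
    println!("flushed {} items", batch.len());
}
```

Tuning these two knobs trades latency against per-request overhead at the destination: larger batches amortize writes, while a shorter fill window keeps replication lag bounded on quiet tables.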
diff --git a/docs/assets/etl.png b/docs/assets/etl.png new file mode 100644 index 00000000..f2a42849 Binary files /dev/null and b/docs/assets/etl.png differ diff --git a/docs/assets/etl.svg b/docs/assets/etl.svg deleted file mode 100644 index 33d386c7..00000000 --- a/docs/assets/etl.svg +++ /dev/null @@ -1,3 +0,0 @@ - - - diff --git a/docs/design/etl-crate-design.md b/docs/design/etl-crate-design.md deleted file mode 100644 index 2abe526d..00000000 --- a/docs/design/etl-crate-design.md +++ /dev/null @@ -1,60 +0,0 @@ -Applications can use data sources and destinations from `etl` to build a data pipeline to continually copy data from the source to the destination. For example, a data pipeline to copy data from Postgres to BigQuery takes about 100 lines of Rust. - -There are three components in a data pipeline: - -1. A data source -2. A data destination -3. A pipline - -The data source is an object from where data will be copied. The data destination is an object to which data will be copied. The pipeline is an object which drives the data copy operations from the source to the destination. - -``` - +----------+ +-----------------+ - | | | | - | Source |---- Data Pipeline --->| Destination | - | | | | - +----------+ +-----------------+ -``` - -So roughly you write code like this: - -```rust -let postgres_source = PostgresSource::new(...); -let bigquery_destination = BigQueryDestination::new(..); -let pipeline = DataPipeline(postgres_source, bigquery_destination); -pipeline.start(); -``` - -Of course, the real code is more than these four lines, but this is the basic idea. For a complete example look at the [bigquery example](https://github.com/supabase/etl/blob/main/etl/examples/bigquery.rs). - -### Data Sources - -A data source is the source for data which the pipeline will copy to the data destination. Currently, the repository has only one data source: [`PostgresSource`](https://github.com/supabase/etl/blob/main/etl/src/pipeline/sources/postgres.rs). 
`PostgresSource` is the primary data source; data in any other source or destination would have originated from it. - -### Data Destinations - -A data destination is where the data from a data source is copied. There are two kinds of data destinations. Those which retain the essential nature of data coming out of a `PostgresSource` and those which don't. The former kinds of data destinations can act as a data source in future. The latter kind can't act as a data source and are data's final resting place. - -For instance, [`BigQueryDestination`](https://github.com/supabase/etl/blob/main/etl/src/pipeline/destinations/bigquery.rs) ensures that the change data capture (CDC) stream coming in from a source is materialized into tables in a BigQuery database. Once this lossy data transformation is done, it can not be used as a CDC stream again. - -Contrast this with a potential future destination `S3Destination` or `KafkaDestination` which just copies the CDC stream as is. The data deposited in the destination can later be used as if it was coming from Postgres directly. - -### Data Pipeline - -A data pipeline encapsulates the business logic to copy the data from the source to the destination. It also orchestrates resumption of the CDC stream from the exact location it was last stopped at. The data destination participates in this by persisting the resumption state and returning it to the pipeline when it restarts. - -If a data destination is not transactional (e.g. `S3Destination`), it is not always possible to keep the CDC stream and the resumption state consistent with each other. This can result in these non-transactional destinations having duplicate portions of the CDC stream. Data pipeline helps in deduplicating these duplicate CDC events when the data is being copied over to a transactional store like BigQuery. 
- -Finally, the data pipeline reports back the log sequence number (LSN) upto which the CDC stream has been copied in the destination to the `PostgresSource`. This allows the Postgres database to reclaim disk space by removing WAL segment files which are no longer required by the data destination. - -``` - +----------+ +-----------------+ - | | | | - | Source |<---- LSN Numbers -----| Destination | - | | | | - +----------+ +-----------------+ -``` - -### Kinds of Data Copies - -CDC stream is not the only kind of data a data pipeline performs. There's also full table copy, aka backfill. These two kinds can be performed either together or separately. For example, a one-off data copy can use the backfill. But if you want to regularly copy data out of Postgres and into your OLAP database, backfill and CDC stream both should be used. Backfill to get the intial copies of the data and CDC stream to keep those copies up to date and changes in Postgres happen to the copied tables. diff --git a/docs/design/index.md b/docs/design/index.md deleted file mode 100644 index e69de29b..00000000 diff --git a/docs/explanation/architecture.md b/docs/explanation/architecture.md new file mode 100644 index 00000000..b62abfa8 --- /dev/null +++ b/docs/explanation/architecture.md @@ -0,0 +1,4 @@ +# ETL Architecture + +!!! info "Coming Soon" + This page is under development. \ No newline at end of file diff --git a/docs/explanation/crate-structure.md b/docs/explanation/crate-structure.md new file mode 100644 index 00000000..4450f224 --- /dev/null +++ b/docs/explanation/crate-structure.md @@ -0,0 +1,4 @@ +# Crate Structure + +!!! info "Coming Soon" + This page is under development. \ No newline at end of file diff --git a/docs/explanation/design.md b/docs/explanation/design.md new file mode 100644 index 00000000..1e0d3261 --- /dev/null +++ b/docs/explanation/design.md @@ -0,0 +1,4 @@ +# Design Philosophy + +!!! info "Coming Soon" + This page is under development. 
\ No newline at end of file diff --git a/docs/explanation/index.md b/docs/explanation/index.md new file mode 100644 index 00000000..44511639 --- /dev/null +++ b/docs/explanation/index.md @@ -0,0 +1,4 @@ +# Explanation + +!!! info "Coming Soon" + This page is under development. \ No newline at end of file diff --git a/docs/explanation/performance.md b/docs/explanation/performance.md new file mode 100644 index 00000000..c963daad --- /dev/null +++ b/docs/explanation/performance.md @@ -0,0 +1,4 @@ +# Performance Model + +!!! info "Coming Soon" + This page is under development. \ No newline at end of file diff --git a/docs/explanation/replication.md b/docs/explanation/replication.md new file mode 100644 index 00000000..04613fb6 --- /dev/null +++ b/docs/explanation/replication.md @@ -0,0 +1,4 @@ +# Replication Protocol + +!!! info "Coming Soon" + This page is under development. \ No newline at end of file diff --git a/docs/getting-started/first-pipeline.md b/docs/getting-started/first-pipeline.md new file mode 100644 index 00000000..002d829a --- /dev/null +++ b/docs/getting-started/first-pipeline.md @@ -0,0 +1,4 @@ +# Your First Pipeline + +!!! info "Coming Soon" + This page is under development. \ No newline at end of file diff --git a/docs/getting-started/installation.md b/docs/getting-started/installation.md new file mode 100644 index 00000000..a02e348a --- /dev/null +++ b/docs/getting-started/installation.md @@ -0,0 +1,4 @@ +# Installation + +!!! info "Coming Soon" + This page is under development. \ No newline at end of file diff --git a/docs/getting-started/quickstart.md b/docs/getting-started/quickstart.md new file mode 100644 index 00000000..56ac1202 --- /dev/null +++ b/docs/getting-started/quickstart.md @@ -0,0 +1,4 @@ +# Quick Start + +!!! info "Coming Soon" + This page is under development. 
\ No newline at end of file diff --git a/docs/guides/database-setup.md b/docs/guides/database-setup.md deleted file mode 100644 index 10871a69..00000000 --- a/docs/guides/database-setup.md +++ /dev/null @@ -1,79 +0,0 @@ -# Database Setup Guide - -This guide explains how to set up and initialize the database using the `init_db.sh` script. - -## Local Development Setup - -For local development, we use a single PostgreSQL database that serves the API, replicator and all tests. This means: - -- One database cluster for everything (API, replicator, and testing) -- The database cluster runs on port 5430 by default -- All components (API and replicator) connect to this same database cluster -- When running tests, they use the same database cluster but with a different database per test - -## Prerequisites - -Before running the script, ensure you have the following installed: -- PostgreSQL client (`psql`) -- Docker (if you want to run PostgreSQL in a container) -- Rust toolchain -- SQLx CLI (if you plan to run migrations) - -To install SQLx CLI, run: -```bash -cargo install --version='~0.7' sqlx-cli --no-default-features --features rustls,postgres -``` - -## Environment Variables - -The script uses the following environment variables (with their default values): - -| Variable | Default | Description | -|------------------------|-----------|----------------------------------------| -| `POSTGRES_USER` | postgres | Database username | -| `POSTGRES_PASSWORD` | postgres | Database password | -| `POSTGRES_DB` | postgres | Database name | -| `POSTGRES_PORT` | 5430 | Port to run PostgreSQL on | -| `POSTGRES_HOST` | localhost | Database host | -| `POSTGRES_DATA_VOLUME` | - | Path for persistent storage (optional) | - -## Usage - -### Basic Usage - -Run the script from the repository directory: - -```bash -./scripts/init_db.sh -``` - -This will: -1. Start a PostgreSQL container (if Docker is available) -2. Wait for the database to be ready -3. 
Run database migrations (if chosen) - -### Advanced Options - -#### Skip Docker Container - -To skip the Docker container setup (useful if you're already running PostgreSQL locally): - -```bash -SKIP_DOCKER=1 ./scripts/init_db.sh -``` - -#### Skip Migrations - -To skip running database migrations: - -```bash -SKIP_MIGRATIONS=1 ./scripts/init_db.sh -``` - -#### Persistent Storage - -To specify a custom location for persistent storage: - -```bash -POSTGRES_DATA_VOLUME="/path/to/storage" ./scripts/init_db.sh -``` diff --git a/docs/guides/index.md b/docs/guides/index.md deleted file mode 100644 index e69de29b..00000000 diff --git a/docs/how-to/configure-postgres.md b/docs/how-to/configure-postgres.md new file mode 100644 index 00000000..b208e601 --- /dev/null +++ b/docs/how-to/configure-postgres.md @@ -0,0 +1,4 @@ +# Configure PostgreSQL + +!!! info "Coming Soon" + This page is under development. \ No newline at end of file diff --git a/docs/how-to/custom-destinations.md b/docs/how-to/custom-destinations.md new file mode 100644 index 00000000..2a4a3434 --- /dev/null +++ b/docs/how-to/custom-destinations.md @@ -0,0 +1,4 @@ +# Implement Custom Destinations + +!!! info "Coming Soon" + This page is under development. \ No newline at end of file diff --git a/docs/how-to/debugging.md b/docs/how-to/debugging.md new file mode 100644 index 00000000..199c490e --- /dev/null +++ b/docs/how-to/debugging.md @@ -0,0 +1,4 @@ +# Debug Replication Issues + +!!! info "Coming Soon" + This page is under development. \ No newline at end of file diff --git a/docs/how-to/index.md b/docs/how-to/index.md new file mode 100644 index 00000000..ffd78f6d --- /dev/null +++ b/docs/how-to/index.md @@ -0,0 +1,4 @@ +# How-to Guides + +!!! info "Coming Soon" + This page is under development. 
\ No newline at end of file diff --git a/docs/how-to/performance.md b/docs/how-to/performance.md new file mode 100644 index 00000000..7826051e --- /dev/null +++ b/docs/how-to/performance.md @@ -0,0 +1,4 @@ +# Optimize Performance + +!!! info "Coming Soon" + This page is under development. \ No newline at end of file diff --git a/docs/how-to/schema-changes.md b/docs/how-to/schema-changes.md new file mode 100644 index 00000000..aabf712e --- /dev/null +++ b/docs/how-to/schema-changes.md @@ -0,0 +1,4 @@ +# Handle Schema Changes + +!!! info "Coming Soon" + This page is under development. \ No newline at end of file diff --git a/docs/how-to/testing.md b/docs/how-to/testing.md new file mode 100644 index 00000000..48234cc7 --- /dev/null +++ b/docs/how-to/testing.md @@ -0,0 +1,4 @@ +# Set Up Tests + +!!! info "Coming Soon" + This page is under development. \ No newline at end of file diff --git a/docs/index.md b/docs/index.md index 13c63d31..160c6859 100644 --- a/docs/index.md +++ b/docs/index.md @@ -5,6 +5,9 @@ hide: # ETL +!!! info "Coming Soon" + ETL docs are coming soon! + Welcome to the ETL project, a Rust-based collection of tooling designed to build efficient and reliable Postgres replication applications. This documentation page provides an overview of the ETL project, the benefits of using ETL, the advantages of implementing it in Rust, and an introduction to Postgres logical replication. It also outlines the resources available in this documentation to help you get started. ## What is ETL @@ -55,16 +58,3 @@ The ETL crate is written in Rust to leverage the language's unique strengths, ma - **Ecosystem Integration**: Rust’s growing ecosystem and compatibility with modern cloud and database technologies make it a natural fit for Postgres-focused infrastructure. By using Rust, the ETL crate provides a fast, safe, and scalable solution for building Postgres replication applications. - -## What Does the Documentation Cover? 
- -This documentation is designed to help you effectively use the ETL crate to build Postgres replication applications. It includes the following resources: - -- [**Tutorials**](tutorials/index.md): Step-by-step guides to get started with the ETL crate, including setting up a basic data pipeline, configuring Postgres logical replication, and connecting to destinations like BigQuery or other OLAP databases. Check the [examples folder](https://github.com/supabase/etl/tree/main/etl/examples) for practical code samples. -- [**Guides**](guides/index.md): In-depth explanations of key concepts, such as building custom data pipelines, handling change data capture (CDC), and optimizing performance for specific use cases. -- [**Reference**](reference/index.md): Detailed documentation of the crate's API, including modules like `etl::pipeline`, `etl::sources::postgres`, and available destinations (e.g., `BigQueryDestination`). Each destination is feature-gated, so you can enable only what you need. -- [**Design**](design/index.md): Overview of the crate's architecture, including its modular pipeline structure, source-destination flow, and extensibility for custom integrations. - -The ETL crate is distributed under the [Apache-2.0 License](https://www.apache.org/licenses/LICENSE-2.0). See the [LICENSE](https://github.com/supabase/etl/blob/main/LICENSE) file for more information. - -Ready to start building your Postgres replication pipeline? Dive into the [Getting started guide](tutorials/getting-started.md) to set up your first pipeline. diff --git a/docs/reference/index.md b/docs/reference/index.md index e69de29b..1d807483 100644 --- a/docs/reference/index.md +++ b/docs/reference/index.md @@ -0,0 +1,4 @@ +# Reference + +!!! info "Coming Soon" + This page is under development. 
\ No newline at end of file diff --git a/docs/stylesheets/extra.css b/docs/stylesheets/extra.css index 61217293..55b24920 100644 --- a/docs/stylesheets/extra.css +++ b/docs/stylesheets/extra.css @@ -1,12 +1,135 @@ -[data-md-color-scheme="slate"] { - --md-default-bg-color:#121212; - --md-default-fg-color--light: white; - --md-code-bg-color: #2a2929; - --md-code-hl-keyword-color: #569cd6; +/* Custom Supabase color palette */ +:root { + --tab-size-preference: 4; + + /* Custom primary color: #11181c */ + --md-primary-fg-color: #11181c; + --md-primary-fg-color--light: #11181c; + --md-primary-fg-color--dark: #0a0e10; + + /* Custom accent color: #34b27b */ + --md-accent-fg-color: #34b27b; + --md-accent-fg-color--transparent: rgba(52, 178, 123, 0.1); } -.md-header, .md-tabs { - background-color: var(--md-default-bg-color); - color: var(--md-default-fg-color--light); - font-family: var(--md-text-font); +/* Override Material Design custom palette */ +[data-md-color-primary="custom"] { + --md-primary-fg-color: #11181c; + --md-primary-fg-color--light: #11181c; + --md-primary-fg-color--dark: #0a0e10; +} + +[data-md-color-accent="custom"] { + --md-accent-fg-color: #34b27b; + --md-accent-fg-color--transparent: rgba(52, 178, 123, 0.1); +} + +/* Code and pre elements */ +pre, code { + tab-size: var(--tab-size-preference); +} + +/* Ensure good readability */ +.md-typeset { + font-size: 0.8rem; + line-height: 1.6; +} + +/* Code styling improvements */ +.md-typeset code { + font-size: 0.85em; + color: var(--md-accent-fg-color); + font-weight: 500; +} + +/* Clean navigation */ +.md-nav__item .md-nav__link--active { + color: var(--md-accent-fg-color); + font-weight: 500; +} + +/* Table styling */ +.md-typeset table:not([class]) th { + background-color: var(--md-accent-fg-color--transparent); +} + +/* Logo styling - ensure visibility */ +.md-header__button.md-logo img { + width: 1.5rem; + height: 1.5rem; +} + +/* Header styling with custom primary color */ +.md-header { + 
background-color: var(--md-primary-fg-color); +} + +.md-header__title { + color: #ffffff; + font-weight: 600; +} + +.md-header__button { + color: rgba(255, 255, 255, 0.7); +} + +.md-header__button:hover { + color: #ffffff; +} + +/* Tabs styling */ +.md-tabs { + background-color: var(--md-primary-fg-color); + border-bottom: 1px solid rgba(255, 255, 255, 0.1); +} + +.md-tabs__link { + color: rgba(255, 255, 255, 0.7); + font-weight: 500; +} + +.md-tabs__link:hover { + color: #ffffff; +} + +.md-tabs__link--active { + color: var(--md-accent-fg-color); + font-weight: 600; +} + +/* Links */ +.md-typeset a { + color: var(--md-accent-fg-color); + text-decoration: none; + font-weight: 500; +} + +.md-typeset a:hover { + text-decoration: underline; +} + +/* Buttons */ +.md-button { + background-color: var(--md-accent-fg-color); + border-color: var(--md-accent-fg-color); + color: #ffffff; +} + +.md-button:hover { + background-color: #2a9d65; + border-color: #2a9d65; +} + +/* Footer */ +.md-footer { + background-color: var(--md-primary-fg-color); + color: #ffffff; +} + +.md-footer__link { + color: rgba(255, 255, 255, 0.7); +} + +.md-footer__link:hover { + color: var(--md-accent-fg-color); } \ No newline at end of file diff --git a/docs/tutorials/getting-started.md b/docs/tutorials/getting-started.md deleted file mode 100644 index e69de29b..00000000 diff --git a/docs/tutorials/index.md b/docs/tutorials/index.md index e69de29b..9cc1257f 100644 --- a/docs/tutorials/index.md +++ b/docs/tutorials/index.md @@ -0,0 +1,4 @@ +# Tutorials + +!!! info "Coming Soon" + This page is under development. \ No newline at end of file diff --git a/docs/tutorials/memory-destination.md b/docs/tutorials/memory-destination.md new file mode 100644 index 00000000..e97cc088 --- /dev/null +++ b/docs/tutorials/memory-destination.md @@ -0,0 +1,4 @@ +# Memory Destination + +!!! info "Coming Soon" + This page is under development. 
\ No newline at end of file diff --git a/docs/tutorials/testing-pipelines.md b/docs/tutorials/testing-pipelines.md new file mode 100644 index 00000000..44bd431d --- /dev/null +++ b/docs/tutorials/testing-pipelines.md @@ -0,0 +1,4 @@ +# Testing Pipelines + +!!! info "Coming Soon" + This page is under development. \ No newline at end of file diff --git a/mkdocs.yaml b/mkdocs.yaml index 94281c42..6bbccef5 100644 --- a/mkdocs.yaml +++ b/mkdocs.yaml @@ -1,64 +1,69 @@ site_name: ETL site_url: https://supabase.github.io/etl -site_description: Build Postgres replication apps in Rust +site_description: Stream your Postgres data anywhere in real-time. Simple Rust building blocks for change data capture (CDC) pipelines. copyright: Copyright © Supabase repo_name: supabase/etl repo_url: https://github.com/supabase/etl edit_uri: edit/main/docs/ nav: - - Home: - - index.md + - Home: index.md - Tutorials: - - Tutorials: tutorials/index.md - - Getting started: tutorials/getting-started.md - - Guides: - - Guides: guides/index.md - - Database setup: guides/database-setup.md - - Design: - - Design: design/index.md - - ETL crate design: design/etl-crate-design.md + - Overview: tutorials/index.md + - Installation: getting-started/installation.md + - Quick Start: getting-started/quickstart.md + - Your First Pipeline: getting-started/first-pipeline.md + - Working with Destinations: tutorials/memory-destination.md + - Testing Pipelines: tutorials/testing-pipelines.md + - How-to Guides: + - Overview: how-to/index.md + - Configure PostgreSQL: how-to/configure-postgres.md + - Handle Schema Changes: how-to/schema-changes.md + - Create Custom Destinations: how-to/custom-destinations.md + - Debug Replication Issues: how-to/debugging.md + - Optimize Performance: how-to/performance.md + - Test Your Pipelines: how-to/testing.md - Reference: - - Reference: reference/index.md + - Overview: reference/index.md + - Explanation: + - Overview: explanation/index.md + - Architecture: 
explanation/architecture.md + - Replication Protocol: explanation/replication.md + - Design Philosophy: explanation/design.md + - Performance Characteristics: explanation/performance.md + - Crate Structure: explanation/crate-structure.md theme: name: "material" favicon: "assets/favicon.ico" - logo: "assets/etl.svg" + logo: "assets/etl.png" homepage: https://supabase.github.io/etl features: - - content.code.annotate - - content.code.copy - - content.action.edit - navigation.expand + - navigation.sections + - navigation.indexes - navigation.tabs - navigation.tabs.sticky - - navigation.indexes - - navigation.footer - navigation.top + - navigation.footer - toc.follow - - search.suggest - - search.share + - content.code.copy + - content.code.annotate palette: - # Palette toggle for light mode - - media: "(prefers-color-scheme: light)" - scheme: default + - scheme: default + primary: custom + accent: custom toggle: - icon: material/weather-night + icon: material/brightness-7 name: Switch to dark mode - primary: green - accent: green - - # Palette toggle for dark mode - - media: "(prefers-color-scheme: dark)" - scheme: slate + - scheme: slate + primary: custom + accent: custom toggle: - icon: material/weather-sunny + icon: material/brightness-4 name: Switch to light mode - primary: green - accent: green font: - text: "IBM Plex Sans" + text: Roboto code: Roboto Mono icon: repo: material/github @@ -81,23 +86,17 @@ extra: name: ETL on GitHub markdown_extensions: - - toc: - permalink: true - permalink_title: Anchor link to this section for reference - pymdownx.highlight: linenums: true guess_lang: false use_pygments: true pygments_style: default + - pymdownx.superfences - pymdownx.tabbed: alternate_style: true - - pymdownx.tasklist: - custom_checkbox: true - - pymdownx.superfences - - pymdownx.keys + - pymdownx.snippets + - pymdownx.tasklist - admonition - - tables - - footnotes plugins: - search diff --git a/res/etl-logo-extended.png b/res/etl-logo-extended.png new file mode 
100644 index 00000000..7c73ee5b Binary files /dev/null and b/res/etl-logo-extended.png differ diff --git a/res/etl-logo.png b/res/etl-logo.png new file mode 100644 index 00000000..f2a42849 Binary files /dev/null and b/res/etl-logo.png differ