[tool] Improve benchmark tool #21634

@siyuanfoundation

Description

Background

The current tools/benchmark in etcd is a low-level tool that uses the gRPC client directly to generate load and reports performance along a limited set of dimensions. It has several limitations that make it difficult to use for comprehensive performance analysis:

  • Lack of Time-Series Data: It primarily outputs a single summary number at the end of the run (e.g., average latency). This hides performance spikes, degradation over time, or warming-up effects.
  • No Resource Monitoring: It does not monitor or correlate etcd server resource usage (specifically memory footprint) during the benchmark run.
  • Lack of Guidance: There is no standard guidance or set of profiles defined for what tests to run to measure performance consistently, making it hard to compare results across different setups.
  • Manual Comparison: Comparing results across different branches or commits requires manual checkout, build, and execution, making regression testing tedious.
  • Environment Setup: Running tests in a clean, isolated environment requires manual setup of etcd instances.

Proposal

We could improve the tool in the following areas:

  • Create Docker setup for single-node etcd and benchmark client.
  • Implement time-series data collection for latency and resource usage (including etcd memory monitoring).
  • Implement branch/commit comparison script to automate checkout, build, and diff.
  • Define and implement standardized benchmark test profiles ("ReadHeavy", etc.).
  • Implement visualization (e.g., terminal ASCII or static image generation).
