Background
The current tools/benchmark in etcd is a low-level tool that drives the gRPC client directly to generate load and reports performance along a limited set of dimensions (a minimal sketch of this style follows the list below). It has several limitations that make it difficult to use for comprehensive performance analysis:
- Lack of Time-Series Data: It primarily outputs a single summary number at the end of the run (e.g., average latency). This hides performance spikes, degradation over time, and warm-up effects.
- No Resource Monitoring: It does not monitor or correlate etcd server resource usage (specifically memory footprint) during the benchmark run.
- Lack of Guidance: There is no standard guidance or set of profiles defined for what tests to run to measure performance consistently, making it hard to compare results across different setups.
- Manual Comparison: Comparing results across different branches or commits requires manual checkout, build, and execution, making regression testing tedious.
- Environment Setup: Running tests in a clean, isolated environment requires manual setup of etcd instances.
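To make the "low-level" characterization concrete, the pattern used by such a tool is roughly the following: connect with the gRPC-based clientv3, issue requests in a loop, and print a single aggregate at the end. This is a minimal illustrative sketch, not the actual tools/benchmark code; the endpoint, key names, and request count are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Connect directly with the gRPC-based client, as a low-level load generator does.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"}, // placeholder endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	const total = 1000
	var sum time.Duration
	for i := 0; i < total; i++ {
		start := time.Now()
		if _, err := cli.Put(context.Background(), fmt.Sprintf("key-%d", i), "value"); err != nil {
			panic(err)
		}
		sum += time.Since(start)
	}
	// Only a single aggregate number survives the run: no time series,
	// no server-side resource correlation.
	fmt.Printf("avg put latency: %v\n", sum/time.Duration(total))
}
```

The sketch shows why the limitations above follow from the design: per-request timings are collapsed into one number, and nothing on the server side is observed at all.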
Proposal
We could improve the tool in the following areas: