This section explains how the efficiency of the Monitoring Pace Scheduler is evaluated by comparing the baseline (fixed-interval) and dynamic (adaptive-interval) monitoring strategies.
We simulate load with Gatling, capture Prometheus traffic with tcpdump, and analyze the results to measure the total data transmitted and the average bandwidth.
To evaluate the efficiency of dynamic vs baseline monitoring:
We use Gatling to generate a realistic and reproducible workload on the target service.
See the full setup and scenario in `docs/gatling_simulation.md`.
Start the two monitoring strategies (baseline and dynamic):
```bash
python3 baseline.py --duration 3600
python3 scheduler.py --duration 3600
```
Each script will expose Prometheus metrics on a different port.
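The exporters inside `baseline.py` and `scheduler.py` are not shown in this excerpt; as a minimal stdlib-only sketch of how each script could expose metrics on its own port (the handler and the `scrape_count_total` metric name are illustrative assumptions, not the project's actual code):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def render_metrics(samples):
    """Render a name -> value mapping in the Prometheus text exposition format."""
    return "".join(f"{name} {value}\n" for name, value in samples.items())

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/metrics":
            # A real exporter would report live gauges; this value is a stand-in.
            body = render_metrics({"scrape_count_total": 42}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

# Each group would bind its own port, e.g.:
# HTTPServer(("localhost", 9092), MetricsHandler).serve_forever()
```

Running one such server per group is what lets tcpdump separate the two traffic streams by port in the next step.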
Use tcpdump to capture and isolate the network traffic generated by each group:
```bash
sudo timeout 3600 tcpdump -i lo -w baseline-group.pcap port 9092 -v
sudo timeout 3600 tcpdump -i lo -w dynamic-group.pcap port 9091 -v
```
Use the benchmarking script to extract bandwidth and volume from the `.pcap` files:
```bash
./pcap_benchmark.sh baseline-group.pcap 3600
./pcap_benchmark.sh dynamic-group.pcap 3600
```
This script computes:
- Total data volume (bytes / KB / MB)
- Average bandwidth usage (bps / kbps / Mbps)
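The conversion behind those figures is simple arithmetic: average bandwidth is total bits divided by the capture duration. A minimal Python sketch of the same computation (`average_bandwidth` is an illustrative helper, not part of `pcap_benchmark.sh`):

```python
def average_bandwidth(total_bytes, duration_s):
    """Return average bandwidth in bps, kbps and Mbps for a capture."""
    bps = total_bytes * 8 / duration_s  # bytes -> bits, spread over the run
    return {"bps": bps, "kbps": bps / 1e3, "Mbps": bps / 1e6}

# e.g. 45 MB captured over a one-hour (3600 s) run:
stats = average_bandwidth(45_000_000, 3600)
# 45e6 bytes * 8 / 3600 s = 100000 bps = 100 kbps = 0.1 Mbps
```

Comparing these numbers for the two `.pcap` files gives the bandwidth saving of the dynamic group.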
To assess how accurately the Dynamic Group captures the evolution of the metric (compared to the Baseline Group), you can use the evaluation notebook:
```bash
jupyter notebook src/benchmark/evaluation.ipynb
```
This notebook:
- Aligns both groups on a common time axis (in seconds)
- Performs linear interpolation of the dynamic metric values
- Computes the following metrics:
  - Precision: the percentage of dynamic points that exactly match the baseline points
  - MAE: Mean Absolute Error
  - MAPE: Mean Absolute Percentage Error
  - Overlay Distance (`overlay_dx`)
  - Median scrape interval
- Generates a plot comparing the baseline, dynamic, and interpolated values
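As an illustration of the interpolation and error metrics above, here is a sketch under the assumption that each group is a series of (timestamp, value) pairs; the notebook's actual implementation may differ, and `evaluate` is a hypothetical helper:

```python
import numpy as np

def evaluate(base_t, base_v, dyn_t, dyn_v):
    """Interpolate the dynamic series onto the baseline time axis and score it."""
    base_v = np.asarray(base_v, dtype=float)
    interp = np.interp(base_t, dyn_t, dyn_v)  # linear interpolation
    mae = np.mean(np.abs(interp - base_v))    # Mean Absolute Error
    mape = np.mean(np.abs((interp - base_v) / base_v)) * 100  # percent
    precision = np.mean(interp == base_v) * 100  # exact-match percentage
    return {"MAE": mae, "MAPE": mape, "precision": precision}

# Baseline sampled every second, dynamic every two seconds:
base_t, base_v = [0, 1, 2, 3, 4], [10, 12, 14, 16, 18]
dyn_t, dyn_v = [0, 2, 4], [10, 14, 18]
scores = evaluate(base_t, base_v, dyn_t, dyn_v)
# The metric is linear here, so interpolation recovers it exactly:
# MAE = 0.0, MAPE = 0.0, precision = 100.0
```

In practice the dynamic group misses some variation between its scrapes, so MAE and MAPE quantify exactly how much fidelity is traded for the bandwidth saving.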
This analysis helps visualize and quantify the trade-off between reduced data and metric fidelity.
The notebook is complementary but not required to run the benchmark pipeline.
Required Python packages:
`pandas`, `numpy`, `matplotlib`, `scipy`, `scikit-learn`
You can modify the tolerance or interpolation method as needed for other experiments.