Commit 4602401

Add cache analytics and observability framework (#320)
* Initial plan

* Implement cache analytics and observability framework
  - Add CacheMetrics class for thread-safe metric collection
  - Track hits, misses, latencies, stale hits, recalculations
  - Integrate metrics into all cache backends (memory, pickle, mongo, redis, sql)
  - Add enable_metrics and metrics_sampling_rate parameters to @cachier
  - Create MetricsExporter base class and PrometheusExporter implementation
  - Add comprehensive tests for metrics functionality
  - Add metrics_example.py demonstrating usage

* Add metrics documentation and fix linting issues
  - Add Cache Analytics section to README.rst
  - Fix ruff linting issues in metrics.py and prometheus.py
  - Add prometheus_exporter_example.py
  - All tests passing (19/19); documentation complete with examples

* Add comprehensive implementation documentation
  - Add METRICS_IMPLEMENTATION.md with technical details
  - Document API reference, design decisions, and future enhancements

* [pre-commit.ci] auto fixes from pre-commit.com hooks (applied repeatedly throughout; for more information, see https://pre-commit.ci)

* Add `assert` to ensure `start_time` is not `None` before latency recording

* Apply review suggestions to README.rst, examples/metrics_example.py, and src/cachier/metrics.py

* Address PR review feedback - complete implementation
  - Use time.perf_counter() for latency measurement (monotonic clock) in core.py and metrics.py
  - Fix Prometheus text exporter to include ALL metrics: hits, misses, hit_rate, avg_latency_ms, stale_hits, recalculations, entry_count, cache_size_bytes, size_limit_rejections
  - Fix repeated HELP/TYPE headers by emitting them once per metric type
  - Add host parameter to PrometheusExporter (default: 127.0.0.1) for security
  - Implement cache size tracking in the base core and memory core: _update_size_metrics() to trigger updates; _get_entry_count() and _get_total_size() methods; called after set_entry, clear_cache, delete_stale_entries

* Address remaining PR review feedback
  - Fix counter increments with deltas (comment 2731262796): track last-seen values to calculate deltas instead of incrementing with absolute values
  - Implement prometheus_client mode with a custom collector (comment 2731262813): add CachierCollector that pulls metrics from registered functions at scrape time, properly populating the /metrics endpoint
  - Add test coverage for use_prometheus_client=True fallback behavior (comment 2731262747)
  - All 21 tests passing (19 existing + 2 new)

* Address PR review feedback - code quality improvements
  - Use absolute imports in base.py (comment 2744902663)
  - Move prometheus example instructions to the module docstring (comment 2744908071)
  - Use contextlib.suppress for exception handling (comments 2744912772, SIM105)
  - Remove trailing commas for the 120-character line length (comments 2744919532, 2744929433)
  - Add a comment explaining yields in the collector (comment 2744926357)
  - Use single formatted string appends (comment 2744927877)
  - Fix README prometheus_client mode documentation (comment 2744928794)
  - Clarify cache size metrics backend support (comment 2744928804)
  - Pass the host parameter to start_http_server (comment 2744928825)
  - Fix metric name consistency with the _total suffix (comment 2744928839)
  - Remove the unused _last_seen dict (comment 2744928850)
  - Use a monotonic clock for windowed latency calculations (comment 2744928866)
  - Record a miss on stale hit for an accurate hit rate (comment 2744928891)
  - Add an explanatory comment to the except clause (comment 2744928901)
  - Don't swallow exceptions in the start() method (comment 2744928818)

* Refactor metrics example to use a single aggregated, formatted print statement (comment 2744970314)

* Consolidate prometheus metric headers and fix imports
  - Combine three-line append patterns into single formatted strings (comment 2744927877)
  - Use absolute imports in sql.py instead of relative imports (comment 2744972453)

* Align S3 backend with the metrics framework (comment 4010458432)
  - Add a metrics parameter to _S3Core.__init__() and pass metrics to the S3 core in the cachier decorator
  - Add the metrics import to s3.py and document the metrics parameter in the S3 core docstring
  - Ensures the S3 backend supports metrics like all other backends

* Refactor Prometheus exporter to use a `_get_func_metrics` helper for cleaner metrics handling

* Update linters' configurations and clean up docstring conventions

* Fix metrics framework: async instrumentation, Prometheus consistency, and cleanup
  - Instrument _call_async with full cache_metrics coverage matching _call (hits, misses, stale hits, recalculations, wait timeouts, latency on every code path)
  - Fix _calc_entry_async to record size_limit_rejection when an entry is not stored
  - Fix _generate_text_metrics to snapshot all functions in one lock acquisition, preventing internally inconsistent Prometheus scrapes
  - Replace the global REGISTRY with a per-instance CollectorRegistry in PrometheusExporter, eliminating silent double-registration data loss
  - Add cachier_wait_timeouts_total to the Prometheus text export and the custom collector
  - Make export_metrics non-abstract in the MetricsExporter ABC (concrete no-op default)
  - Add type annotations to the CachierCollector and MetricsHandler inner classes
  - Move the random import to module level in metrics.py; remove the dead _monotonic_start and _wall_start attributes
  - Document the stale-as-miss counting behavior and the total_size_bytes backend limitation in the MetricSnapshot docstring
  - Remove METRICS_IMPLEMENTATION.md from the repository root
  - Add 13 new tests: async hit/miss/stale tracking, sampling_rate=0.0 boundary, empty window_sizes, double-instantiation isolation, text metrics consistency

* Achieve 100% coverage on the metrics and exporters modules
  - Add # pragma: no cover to unreachable defensive guards (the ImportError handler for optional prometheus_client, a dead early return in _setup_collector)
  - Fix stop() to call server_close() and join the server thread, eliminating a ResourceWarning on socket cleanup
  - Add 17 new tests to reach 100% branch coverage, covering record_wait_timeout, the sampling_rate=0.0 early-return branches in record_stale_hit/record_wait_timeout/record_size_limit_rejection/record_latency, MetricsContext.__enter__ and __exit__ with and without a metrics object, the export_metrics no-op path, start()/stop() and the 404 response path for both server backends, CachierCollector.collect() including the m-is-None skip branch, the fallback when PROMETHEUS_CLIENT_AVAILABLE is patched to False, and stop() when _server is None

* Refactor metrics examples: modularize examples into functions and add a `main()` entry point

* Refactor Prometheus exporter and cache metrics framework
  - Extract `CachierCollector` as a top-level class for cleaner modularity
  - Use `MetricsContext` for consistent cache metrics tracking across sync and async paths
  - Simplify metric counter updates with a shared `_record_counter` helper method
  - Refactor Prometheus text metric generation to eliminate redundancy

* Refactor: compact Prometheus client imports and docstrings in the metrics framework

* Refactor: prefix `set_entry` and `aset_entry` with `_` across all cores (sync and async), centralize size-limit metric recording logic, refine the `TYPE_CHECKING` import logic, rename the `MetricsContext` variable to `_mctx` for consistent naming, and update monkeypatching in tests to reflect the renames

* Refactor: simplify the cutoff calculation in metrics using a ternary operator

* Add tests for metrics: validate `entry_count` and `total_size_bytes` for the memory and pickle backends

* Add tests for `_BaseCore`: metric hook default values and timeout behavior

* Add tests for metrics: refactor sampling rate tests and add Prometheus exporter mocks

* Remove outdated tests for overwrite/skip cache and the Prometheus exporter fallback

* fix(metrics): address review findings before merge
  - Replace all hardcoded test ports (9093-19200) with port=0 and read the actual port from server_address[1] to prevent CI port collisions
  - Clarify the CacheMetrics.window_sizes docstring: windowing is not automatic; callers must pass window= explicitly to get_stats()
  - Add a README note that entry_count/total_size_bytes are populated for the memory backend only; all other backends report 0
  - Standardize MetricsContext guards to 'if self._m is not None:'
  - Remove the dead _init_prometheus_metrics no-op method and its call site
  - Replace deprecated typing.Deque/Dict with deque[...]/dict[...] builtins

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: Borda <6035284+Borda@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Shay Palachy-Affek <shaypal5@users.noreply.github.com>
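One fix above — incrementing counters with deltas rather than absolute values — can be sketched generically. This is a hypothetical illustration of the last-seen-value technique, not the patched code itself:

```python
class DeltaCounter:
    """Turn absolute readings into monotonic increments via last-seen tracking."""

    def __init__(self):
        self._last_seen = 0
        self.total = 0

    def observe(self, absolute_value):
        # Add only the growth since the previous observation, so repeated
        # reports of the same absolute value are not double-counted.
        delta = absolute_value - self._last_seen
        if delta > 0:
            self.total += delta
        self._last_seen = absolute_value


c = DeltaCounter()
c.observe(3)
c.observe(3)  # same absolute value seen again: no change
c.observe(5)
# c.total is now 5, not 3 + 3 + 5 = 11
```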
1 parent 4e181ca commit 4602401

22 files changed

Lines changed: 2849 additions & 163 deletions

README.rst

Lines changed: 99 additions & 0 deletions
@@ -63,6 +63,7 @@ Current features

* Thread-safety.
* **Per-call max age:** Specify a maximum age for cached values per call.
* **Cache analytics and observability:** Track cache performance metrics including hit rates, latencies, and more.

Cachier is **NOT**:

@@ -327,6 +328,104 @@ Cache `None` Values

By default, ``cachier`` does not cache ``None`` values. You can override this behaviour by passing ``allow_none=True`` to the function call.

Cache Analytics and Observability
==================================

Cachier provides built-in metrics collection to monitor cache performance in production environments. This feature is particularly useful for understanding cache effectiveness, identifying optimization opportunities, and debugging performance issues.

Enabling Metrics
----------------

Enable metrics by setting ``enable_metrics=True`` when decorating a function:

.. code-block:: python

    from cachier import cachier

    @cachier(backend='memory', enable_metrics=True)
    def expensive_operation(x):
        return x ** 2

    # Access metrics
    stats = expensive_operation.metrics.get_stats()
    print(f"Hit rate: {stats.hit_rate}%")
    print(f"Avg latency: {stats.avg_latency_ms}ms")

Tracked Metrics
---------------

The metrics system tracks:

* **Cache hits and misses**: Number of cache hits/misses and hit rate percentage
* **Operation latencies**: Average time for cache operations
* **Stale cache hits**: Number of times stale cache entries were accessed
* **Recalculations**: Count of cache recalculations triggered
* **Wait timeouts**: Timeouts during concurrent calculation waits
* **Size limit rejections**: Entries rejected due to ``entry_size_limit``
* **Cache size (memory backend only)**: Number of entries and total size in bytes for the in-memory cache core

Note: ``entry_count`` and ``total_size_bytes`` are populated only for the memory backend. Other backends (pickle, redis, sql, mongo) currently always report ``0`` for these fields.
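As a minimal sketch of how the hit-rate percentage follows from the counters listed above (an illustrative assumption, not cachier's internal code — note that per the changelog a stale hit is also recorded as a miss, which keeps this ratio accurate):

```python
def hit_rate(hits: int, misses: int) -> float:
    """Hit rate as a percentage; 0.0 when no calls have been recorded."""
    total = hits + misses
    return (hits / total) * 100.0 if total else 0.0


print(hit_rate(3, 1))  # 75.0
print(hit_rate(0, 0))  # 0.0 (no division-by-zero on an empty cache)
```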
Sampling Rate
-------------

For high-traffic functions, you can reduce overhead by sampling only a fraction of operations:

.. code-block:: python

    @cachier(enable_metrics=True, metrics_sampling_rate=0.1)  # Sample 10% of calls
    def high_traffic_function(x):
        return x * 2
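The sampling decision itself can be sketched as a simple random draw — a hypothetical illustration of the mechanism, not cachier's exact implementation. A rate of ``0.0`` records nothing and ``1.0`` records every call:

```python
import random


def should_record(sampling_rate: float) -> bool:
    """Decide whether this call's metrics are recorded."""
    if sampling_rate <= 0.0:
        return False  # sampling disabled: record nothing
    if sampling_rate >= 1.0:
        return True  # record every call
    return random.random() < sampling_rate
```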
Exporting to Prometheus
------------------------

Export metrics to Prometheus for monitoring and alerting:

.. code-block:: python

    from cachier import cachier
    from cachier.exporters import PrometheusExporter

    @cachier(backend='redis', enable_metrics=True)
    def my_operation(x):
        return x ** 2

    # Set up the Prometheus exporter.
    # use_prometheus_client controls whether metrics are exposed via the
    # prometheus_client registry (True) or via Cachier's own HTTP handler (False).
    # In both modes, metrics for registered functions are collected live at scrape time.
    exporter = PrometheusExporter(port=9090, use_prometheus_client=True)
    exporter.register_function(my_operation)
    exporter.start()

    # Metrics available at http://localhost:9090/metrics

The exporter provides metrics in Prometheus text format, compatible with standard Prometheus scraping, in both ``use_prometheus_client=True`` and ``use_prometheus_client=False`` modes. When ``use_prometheus_client=True``, Cachier registers a custom collector with ``prometheus_client`` that pulls live statistics from registered functions at scrape time, so scraped values reflect the current state of the cache. When ``use_prometheus_client=False``, Cachier serves the same metrics directly, without requiring the ``prometheus_client`` dependency.
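The text exposition format mentioned above can be sketched as follows. This is a simplified, hypothetical renderer (not cachier's ``_generate_text_metrics``) showing the two conventions the changelog calls out: metric names carry the ``_total`` suffix, and each HELP/TYPE header is emitted once per metric rather than once per function:

```python
def render_text_metrics(func_stats: dict) -> str:
    """Render per-function counters in Prometheus text exposition format."""
    out = []
    for metric in ("cachier_hits_total", "cachier_misses_total"):
        # One HELP/TYPE header per metric, no matter how many functions report it.
        out.append(f"# HELP {metric} Cachier cache counter")
        out.append(f"# TYPE {metric} counter")
        for func_name, stats in func_stats.items():
            out.append(f'{metric}{{function="{func_name}"}} {stats[metric]}')
    return "\n".join(out) + "\n"


text = render_text_metrics(
    {"my_operation": {"cachier_hits_total": 3, "cachier_misses_total": 1}}
)
print(text)
```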
Programmatic Access
-------------------

Access metrics programmatically for custom monitoring:

.. code-block:: python

    stats = my_function.metrics.get_stats()

    if stats.hit_rate < 70.0:
        print(f"Warning: Cache hit rate is {stats.hit_rate}%")
        print("Consider increasing cache size or adjusting stale_after")

Reset Metrics
-------------

Clear collected metrics:

.. code-block:: python

    my_function.metrics.reset()

Cachier Cores
=============

examples/metrics_example.py

Lines changed: 231 additions & 0 deletions
@@ -0,0 +1,231 @@
"""Demonstration of cachier's metrics and observability features."""

import time
from datetime import timedelta

from cachier import cachier


def demo_basic_metrics_tracking():
    """Demonstrate basic metrics tracking."""
    print("=" * 60)
    print("Example 1: Basic Metrics Tracking")
    print("=" * 60)

    @cachier(backend="memory", enable_metrics=True)
    def expensive_operation(x):
        """Simulate an expensive computation."""
        time.sleep(0.1)  # Simulate work
        return x**2

    expensive_operation.clear_cache()

    # First call - cache miss
    print("\nFirst call (cache miss):")
    result1 = expensive_operation(5)
    print(f"  Result: {result1}")

    stats = expensive_operation.metrics.get_stats()
    print(f"  Hits: {stats.hits}, Misses: {stats.misses}")
    print(f"  Hit rate: {stats.hit_rate:.1f}%")
    print(f"  Avg latency: {stats.avg_latency_ms:.2f}ms")

    # Second call - cache hit
    print("\nSecond call (cache hit):")
    result2 = expensive_operation(5)
    print(f"  Result: {result2}")

    stats = expensive_operation.metrics.get_stats()
    print(f"  Hits: {stats.hits}, Misses: {stats.misses}")
    print(f"  Hit rate: {stats.hit_rate:.1f}%")
    print(f"  Avg latency: {stats.avg_latency_ms:.2f}ms")

    # Third call with a different argument - cache miss
    print("\nThird call with different argument (cache miss):")
    result3 = expensive_operation(10)
    print(f"  Result: {result3}")

    stats = expensive_operation.metrics.get_stats()
    print(f"  Hits: {stats.hits}, Misses: {stats.misses}")
    print(f"  Hit rate: {stats.hit_rate:.1f}%")
    print(f"  Avg latency: {stats.avg_latency_ms:.2f}ms")
    print(f"  Total calls: {stats.total_calls}")
def demo_stale_cache_tracking():
    """Demonstrate stale cache tracking."""
    print("\n" + "=" * 60)
    print("Example 2: Stale Cache Tracking")
    print("=" * 60)

    @cachier(
        backend="memory",
        enable_metrics=True,
        stale_after=timedelta(seconds=1),
        next_time=False,
    )
    def time_sensitive_operation(x):
        """Operation with stale_after configured."""
        return x * 2

    time_sensitive_operation.clear_cache()

    # Initial call
    print("\nInitial call:")
    result = time_sensitive_operation(5)
    print(f"  Result: {result}")

    # Call while fresh
    print("\nCall while fresh (within 1 second):")
    result = time_sensitive_operation(5)
    print(f"  Result: {result}")

    # Wait for the cache entry to become stale
    print("\nWaiting for cache to become stale...")
    time.sleep(1.5)

    # Call after the entry is stale
    print("Call after cache is stale:")
    result = time_sensitive_operation(5)
    print(f"  Result: {result}")

    stats = time_sensitive_operation.metrics.get_stats()
    print("\nMetrics after stale access:")
    print(f"  Hits: {stats.hits}")
    print(f"  Stale hits: {stats.stale_hits}")
    print(f"  Recalculations: {stats.recalculations}")
def demo_metrics_sampling():
    """Demonstrate metrics sampling to reduce overhead."""
    print("\n" + "=" * 60)
    print("Example 3: Metrics Sampling (50% sampling rate)")
    print("=" * 60)

    @cachier(
        backend="memory",
        enable_metrics=True,
        metrics_sampling_rate=0.5,  # Only sample 50% of calls
    )
    def sampled_operation(x):
        """Operation with reduced metrics sampling."""
        return x + 1

    sampled_operation.clear_cache()

    # Make many calls
    print("\nMaking 100 calls with 10 unique arguments...")
    for i in range(100):
        sampled_operation(i % 10)

    stats = sampled_operation.metrics.get_stats()
    print("\nMetrics (with 50% sampling):")
    print(f"  Total calls recorded: {stats.total_calls}")
    print(f"  Hits: {stats.hits}")
    print(f"  Misses: {stats.misses}")
    print(f"  Hit rate: {stats.hit_rate:.1f}%")
    print("  Note: Total calls < 100 due to sampling; the hit rate is approximately representative of overall behavior.")
def demo_comprehensive_metrics():
    """Demonstrate a comprehensive metrics snapshot."""
    print("\n" + "=" * 60)
    print("Example 4: Comprehensive Metrics Snapshot")
    print("=" * 60)

    @cachier(backend="memory", enable_metrics=True, entry_size_limit="1KB")
    def comprehensive_operation(x):
        """Operation to demonstrate all metrics."""
        if x > 1000:
            # Return large data to trigger a size limit rejection
            return "x" * 2000
        return x * 2

    comprehensive_operation.clear_cache()

    # Generate various metric events
    comprehensive_operation(5)  # Miss + recalculation
    comprehensive_operation(5)  # Hit
    comprehensive_operation(10)  # Miss + recalculation
    comprehensive_operation(2000)  # Size limit rejection

    stats = comprehensive_operation.metrics.get_stats()
    print(
        f"\nComplete metrics snapshot:\n"
        f"  Hits: {stats.hits}\n"
        f"  Misses: {stats.misses}\n"
        f"  Hit rate: {stats.hit_rate:.1f}%\n"
        f"  Total calls: {stats.total_calls}\n"
        f"  Avg latency: {stats.avg_latency_ms:.2f}ms\n"
        f"  Stale hits: {stats.stale_hits}\n"
        f"  Recalculations: {stats.recalculations}\n"
        f"  Wait timeouts: {stats.wait_timeouts}\n"
        f"  Size limit rejections: {stats.size_limit_rejections}\n"
        f"  Entry count: {stats.entry_count}\n"
        f"  Total size (bytes): {stats.total_size_bytes}"
    )
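The average latency shown in the snapshot is measured with a monotonic clock (``time.perf_counter()``, per the changelog). A minimal, hypothetical sketch of such a timing context manager — illustrating the idea rather than reproducing cachier's ``MetricsContext`` — looks like:

```python
import time


class TimedRecord:
    """Measure the elapsed time of a block using a monotonic clock."""

    def __enter__(self):
        self._start = time.perf_counter()
        return self

    def __exit__(self, exc_type, exc, tb):
        # perf_counter is monotonic, so the measurement is immune to
        # wall-clock jumps (NTP adjustments, DST changes, etc.).
        self.elapsed_ms = (time.perf_counter() - self._start) * 1000.0
        return False


with TimedRecord() as timer:
    time.sleep(0.05)  # stand-in for a cache lookup
# timer.elapsed_ms is now roughly 50 ms
```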
def demo_programmatic_monitoring():
    """Demonstrate programmatic cache health monitoring."""
    print("\n" + "=" * 60)
    print("Example 5: Programmatic Monitoring")
    print("=" * 60)

    @cachier(backend="memory", enable_metrics=True)
    def monitored_operation(x):
        """Operation being monitored."""
        return x**3

    monitored_operation.clear_cache()

    def check_cache_health(func, threshold=80.0):
        """Check whether the cache hit rate meets the given threshold."""
        stats = func.metrics.get_stats()
        if stats.total_calls == 0:
            return True, "No calls yet"

        if stats.hit_rate >= threshold:
            return True, f"Hit rate {stats.hit_rate:.1f}% meets threshold"
        return (
            False,
            f"Hit rate {stats.hit_rate:.1f}% below threshold {threshold}%",
        )

    # Simulate some usage
    print("\nSimulating cache usage...")
    for i in range(20):
        monitored_operation(i % 5)

    # Check health
    is_healthy, message = check_cache_health(monitored_operation, threshold=70.0)
    print("\nCache health check:")
    print(f"  Status: {'HEALTHY' if is_healthy else 'UNHEALTHY'}")
    print(f"  {message}")

    stats = monitored_operation.metrics.get_stats()
    print(f"  Details: {stats.hits} hits, {stats.misses} misses")
def main():
    """Run all metrics demonstration examples."""
    demo_basic_metrics_tracking()
    demo_stale_cache_tracking()
    demo_metrics_sampling()
    demo_comprehensive_metrics()
    demo_programmatic_monitoring()

    print("\n" + "=" * 60)
    print("Examples complete!")
    print("=" * 60)
    print("\nKey takeaways:")
    print("  - Metrics are opt-in via enable_metrics=True")
    print("  - Access metrics via function.metrics.get_stats()")
    print("  - Sampling reduces overhead for high-traffic functions")
    print("  - Metrics are thread-safe and backend-agnostic")
    print("  - Use for production monitoring and optimization")


if __name__ == "__main__":
    main()
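The "thread-safe" takeaway above can be illustrated with a minimal sketch — a generic lock-guarded-counter pattern, assumed for illustration rather than taken from cachier's ``CacheMetrics``. Guarding all counter updates with one lock ensures that concurrent recordings never lose increments:

```python
import threading


class SafeCounters:
    """Hit/miss counters guarded by a single lock for thread-safe recording."""

    def __init__(self):
        self._lock = threading.Lock()
        self.hits = 0
        self.misses = 0

    def record_hit(self):
        with self._lock:
            self.hits += 1

    def record_miss(self):
        with self._lock:
            self.misses += 1


counters = SafeCounters()
threads = [threading.Thread(target=counters.record_hit) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# All 100 increments survive concurrent recording.
```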
