Releases: josepdcs/cachelito
v0.16.0
🪟 W-TinyLFU (Windowed Tiny LFU) Policy!
Version 0.16.0 introduces the W-TinyLFU eviction policy, a state-of-the-art cache replacement algorithm that delivers excellent hit rates:
Key Features:
- 🪟 W-TinyLFU Policy - Two-segment architecture (window + protected) for optimal caching
  - Window segment (FIFO) captures recent items
  - Protected segment (LFU) keeps frequently accessed items
  - Configurable `window_ratio` for workload tuning
- 🎯 Superior Hit Rates - 5-15% better than traditional LRU on mixed workloads
- 🛡️ Cache Pollution Protection - Prevents one-hit wonders from evicting valuable data
- ⚙️ Configurable - Tune `window_ratio` to emphasize recency vs frequency
Basic Example:
```rust
use cachelito::cache;

// Basic W-TinyLFU cache
#[cache(limit = 1000, policy = "w_tinylfu")]
fn fetch_user_data(user_id: u64) -> UserData {
    database.fetch_user(user_id)
}

// Custom window ratio for recency emphasis
#[cache(
    limit = 1000,
    policy = "w_tinylfu",
    window_ratio = 0.3 // 30% window, 70% protected
)]
fn fetch_trending_content(id: u64) -> Content {
    api_client.fetch(id)
}
```

How It Works:
W-TinyLFU splits the cache into two segments:
- Window (20% by default): Recent items using FIFO
- Protected (80%): Frequently accessed items using LFU
This dual-segment approach provides excellent performance across various workload patterns.
Configuration Options:
- `window_ratio` (0.01-0.99, default: 0.20) - Balance between recency and frequency
  - Smaller (0.1-0.15): More emphasis on frequency (stable workloads)
  - Larger (0.3-0.4): More emphasis on recency (changing workloads)
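The window/protected split is simple arithmetic over the entry limit. A minimal sketch of that calculation in plain Rust (illustrative only, not cachelito's internal code; the function name is made up here):

```rust
/// Split a total cache capacity into window and protected segment sizes,
/// mirroring the window_ratio semantics described above (illustrative only).
fn segment_sizes(limit: usize, window_ratio: f64) -> (usize, usize) {
    // Clamp to the documented valid range 0.01..=0.99
    let ratio = window_ratio.clamp(0.01, 0.99);
    // Window gets at least one slot so recent items can always enter
    let window = ((limit as f64 * ratio).round() as usize).max(1);
    let protected = limit.saturating_sub(window);
    (window, protected)
}

fn main() {
    // Default 20% window on a 1000-entry cache
    let (w, p) = segment_sizes(1000, 0.20);
    println!("window = {w}, protected = {p}"); // window = 200, protected = 800
}
```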
Current Status (v0.16.0):
This is the initial, fully functional implementation of W-TinyLFU. Future versions will add:
- Count-Min Sketch admission policy
- Automatic periodic decay
- Segment-specific metrics
Examples:
- `examples/w_tinylfu.rs` - Complete demonstration with multiple scenarios
- `tests/w_tinylfu_policy_tests.rs` - Test suite
What's Changed
Full Changelog: 0.15.0...0.16.0
v0.15.0
⏰ TLRU (Time-aware Least Recently Used) Policy!
Version 0.15.0 introduces the TLRU eviction policy, combining recency, frequency, and time-based factors for intelligent cache management:
New Features:
- ⏰ TLRU Policy - Time-aware LRU that considers age, frequency, and recency
- 🎯 Smart Eviction - Prioritizes entries approaching TTL expiration
- 📊 Triple Scoring - Combines `frequency × position_weight × age_factor`
- 🎚️ Frequency Weight - NEW: `frequency_weight` parameter to fine-tune recency vs frequency balance
- 🔄 Automatic Aging - Entries close to TTL get lower priority
- 💡 Backward Compatible - Without TTL, behaves like ARC
- ⚡ Optimal for Time-Sensitive Data - Perfect for caches with expiring content
Quick Start:
```rust
use cachelito::cache;

// Time-aware caching with TLRU
#[cache(policy = "tlru", limit = 100, ttl = 300)]
fn fetch_weather(city: String) -> WeatherData {
    // Entries approaching 5-minute TTL are prioritized for eviction
    fetch_from_api(city)
}

// TLRU without TTL behaves like ARC
#[cache(policy = "tlru", limit = 50)]
fn compute_expensive(n: u64) -> u64 {
    // Considers both frequency and recency
    expensive_calculation(n)
}

// NEW: Fine-tune with frequency_weight
#[cache(policy = "tlru", limit = 100, ttl = 300, frequency_weight = 1.5)]
fn fetch_popular_content(id: u64) -> Content {
    // frequency_weight > 1.0 emphasizes frequency over recency
    // Popular entries stay cached longer
    database.fetch(id)
}
```

How TLRU Works:
- Score Formula: `frequency^weight × position_weight × age_factor`
- Frequency Weight: Control balance between recency and frequency (default = 1.0)
  - `< 1.0`: Emphasize recency (good for time-sensitive data)
  - `> 1.0`: Emphasize frequency (good for popular content)
- Age Factor: Decreases as entry approaches TTL expiration (0.0 = expired, 1.0 = fresh)
- Eviction: Lowest score = first to evict
- Best For: Time-sensitive data, mixed access patterns, expiring content
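The scoring described above can be illustrated with a small standalone function. This is a sketch of the documented formula only, not the library's internal implementation, and the linear `age_factor` shape is one plausible reading of "decreases as entry approaches TTL":

```rust
/// TLRU-style score following the documented formula:
/// frequency^weight × position_weight × age_factor.
/// Lower score = evicted first. Names here are illustrative.
fn tlru_score(frequency: f64, weight: f64, position_weight: f64, age_factor: f64) -> f64 {
    frequency.powf(weight) * position_weight * age_factor
}

/// Age factor: 1.0 when fresh, decreasing linearly to 0.0 at TTL expiry.
fn age_factor(elapsed_secs: f64, ttl_secs: f64) -> f64 {
    (1.0 - elapsed_secs / ttl_secs).clamp(0.0, 1.0)
}

fn main() {
    // A popular but aging entry vs. a fresh, rarely accessed one
    let popular_old = tlru_score(10.0, 1.5, 0.5, age_factor(250.0, 300.0));
    let fresh_rare = tlru_score(1.0, 1.5, 0.9, age_factor(10.0, 300.0));
    println!("popular_old = {popular_old:.3}, fresh_rare = {fresh_rare:.3}");
}
```

With `frequency_weight = 1.5`, the frequently hit entry outscores the fresh one-off even near its TTL, matching the "popular entries stay cached longer" behavior described above.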
What's Changed
Full Changelog: 0.14.0...0.15.0
v0.14.0
🎯 Conditional Caching with cache_if!
Version 0.14.0 introduces conditional caching, giving you fine-grained control over when results should be cached based on custom predicates:
New Features:
- 🎯 Conditional Caching - Control caching with custom `cache_if` predicates
- 🚫 Error Filtering - Automatically skip caching errors (default for `Result` types)
- 📊 Value-Based Caching - Cache only results meeting specific criteria (non-empty, valid, etc.)
- 💡 Smart Defaults - `Result<T, E>` types only cache `Ok` values by default
- 🔧 Custom Logic - Use any Rust logic in your cache predicates
- ⚡ Zero Overhead - No performance penalty when predicates return `true`
- 🔒 Type-Safe - Compile-time validation of predicate functions
Quick Start:
```rust
use cachelito::cache;

// Only cache non-empty results
fn should_cache(_key: &String, result: &Vec<String>) -> bool {
    !result.is_empty()
}

#[cache(scope = "global", limit = 100, cache_if = should_cache)]
fn fetch_items(category: String) -> Vec<String> {
    // Empty results won't be cached
    database.query(category)
}

// Default behavior: Result types only cache Ok values
#[cache(scope = "global", limit = 50)]
fn validate_email(email: String) -> Result<String, String> {
    if email.contains('@') {
        Ok(format!("Valid: {}", email)) // ✅ Cached
    } else {
        Err(format!("Invalid: {}", email)) // ❌ NOT cached
    }
}

// Custom predicate for Result types
fn cache_only_ok(_key: &String, result: &Result<User, Error>) -> bool {
    result.is_ok()
}

#[cache(scope = "global", cache_if = cache_only_ok)]
fn fetch_user(id: u32) -> Result<User, Error> {
    // Only successful results are cached
    api_client.get_user(id)
}
```

Common Use Cases:
- ✅ Don't cache empty collections
- ✅ Skip caching `None` values
- ✅ Only cache successful HTTP responses
- ✅ Filter out invalid or temporary data
- ✅ Cache based on value characteristics
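The `None`-skipping case, for example, is just another predicate with the same `(&key, &result) -> bool` shape as the examples above (a standalone sketch, independent of the macro):

```rust
// Predicate: cache only when the lookup actually found something.
// The signature mirrors the cache_if examples above.
fn cache_if_some(_key: &String, result: &Option<String>) -> bool {
    result.is_some()
}

fn main() {
    let hit = Some("value".to_string());
    let miss: Option<String> = None;
    println!("cache hit? {}", cache_if_some(&"k".to_string(), &hit));   // true
    println!("cache miss? {}", cache_if_some(&"k".to_string(), &miss)); // false
}
```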
See also: examples/conditional_caching.rs
What's Changed
Full Changelog: 0.13.0...0.14.0
v0.13.0
🎯 Conditional Invalidation with Custom Check Functions!
Version 0.13.0 introduces powerful conditional invalidation, allowing you to selectively invalidate cache entries based on runtime conditions:
New Features:
- 🎯 Conditional Invalidation - Invalidate entries matching custom check functions (predicates)
- 🌐 Global Conditional Invalidation Support - Apply check functions across all registered caches
- 🔑 Key-Based Filtering - Match entries by key patterns, ranges, or any custom logic
- 🏷️ Named Invalidation Check Functions - Automatic validation on every cache access with the `invalidate_on = function_name` attribute
- ⚡ Automatic Registration - All global-scope caches support conditional invalidation by default
- 🔒 Thread-Safe Execution - Safe concurrent check function execution
- 💡 Flexible Conditions - Use any Rust logic in your check functions
Quick Start:
```rust
use cachelito::{cache, invalidate_with, invalidate_all_with};

// Named invalidation check function (evaluated on every access)
fn is_stale(_key: &String, value: &User) -> bool {
    value.updated_at.elapsed() > Duration::from_secs(3600)
}

#[cache(scope = "global", name = "get_user", invalidate_on = is_stale)]
fn get_user(user_id: u64) -> User {
    fetch_user_from_db(user_id)
}

// Manual conditional invalidation
invalidate_with("get_user", |key| {
    key.parse::<u64>().unwrap_or(0) > 1000
});

// Global invalidation across all caches
invalidate_all_with(|_cache_name, key| {
    key.parse::<u64>().unwrap_or(0) >= 1000
});
```

See also:
- `examples/conditional_invalidation.rs` - Manual conditional invalidation
- `examples/named_invalidation.rs` - Named invalidation check functions
What's Changed
Full Changelog: 0.12.0...0.13.0
v0.12.0
🔥 Smart Cache Invalidation!
Version 0.12.0 introduces intelligent cache invalidation mechanisms beyond simple TTL expiration:
New Features:
- 🏷️ Tag-Based Invalidation - Group related caches and invalidate them together
- 📡 Event-Driven Invalidation - Trigger invalidation when application events occur
- 🔗 Dependency-Based Invalidation - Cascade invalidation to dependent caches
- 🎯 Manual Invalidation - Invalidate specific caches by name
- 🔄 Flexible Combinations - Use tags, events, and dependencies together
- ⚡ Zero Overhead - No performance impact when not using invalidation
- 🔒 Thread-Safe - All operations are atomic and concurrent-safe
Quick Start:
```rust
use cachelito::{cache, invalidate_by_tag, invalidate_by_event};

// Tag-based grouping
#[cache(tags = ["user_data", "profile"], name = "get_user_profile")]
fn get_user_profile(user_id: u64) -> UserProfile {
    fetch_from_db(user_id)
}

// Event-driven invalidation
#[cache(events = ["user_updated"], name = "get_user_settings")]
fn get_user_settings(user_id: u64) -> Settings {
    fetch_settings(user_id)
}

// Invalidate all user_data caches
invalidate_by_tag("user_data");

// Invalidate on event
invalidate_by_event("user_updated");
```

See also: `examples/smart_invalidation.rs`
What's Changed
- fix: polishing code by @josepdcs in #28
- Add intelligent cache invalidation strategies by @josepdcs in #30
Full Changelog: 0.11.0...0.12.0
v0.11.0
🎲 Random Replacement Policy!
Version 0.11.0 introduces the Random eviction policy for baseline benchmarking and simple use cases:
New Features:
- 🎲 Random Eviction Policy - Randomly evicts entries when cache is full
- ⚡ O(1) Performance - Constant-time eviction with no access tracking overhead
- 🔒 Thread-Safe RNG - Uses `fastrand` for fast, lock-free random selection
- 📊 Minimal Overhead - No order updates on cache hits (unlike LRU/ARC)
- 🎯 Benchmark Baseline - Ideal for comparing policy effectiveness
- 🔄 All Cache Types - Available in sync (thread-local & global) and async caches
- 📚 Full Support - Works with `limit`, `ttl`, and `max_memory` attributes
Quick Start:
```rust
// Simple random eviction - O(1) performance
#[cache(policy = "random", limit = 1000)]
fn baseline_cache(x: u64) -> u64 { x * x }

// Random with memory limit
#[cache(policy = "random", max_memory = "100MB")]
fn random_with_memory(key: String) -> Vec<u8> {
    vec![0u8; 1024]
}
```

When to Use Random:
- Baseline for performance benchmarks
- Truly random access patterns
- Simplicity preferred over optimization
- Reducing lock contention vs LRU/LFU
What's Changed
Full Changelog: 0.10.1...0.11.0
v0.10.1
- 📦 Version Unification: All crates now use version 0.10.1 for consistency
  - `cachelito`: 0.10.0 → 0.10.1
  - `cachelito-core`: 0.10.0 → 0.10.1
  - `cachelito-macros`: 0.10.0 → 0.10.1
  - `cachelito-macro-utils`: 0.10.0 → 0.10.1
  - `cachelito-async`: 0.2.0 → 0.10.1
  - `cachelito-async-macros`: 0.2.0 → 0.10.1
Fixed
- 🔧 Async Cache Integration: Updated `cachelito-async` and `cachelito-async-macros`
  - Async caches now properly support the `max_memory` attribute
  - The `insert_with_memory()` method no longer requires a `MemoryEstimator` when `max_memory` is not specified
  - Added protection against infinite eviction loops when value size exceeds `max_memory`
Added
- 🛡️ Infinite Loop Protection: All caches (sync and async) now prevent infinite eviction loops
  - When a value's memory size exceeds `max_memory`, it's not cached (returns early)
  - Applies to: `ThreadLocalCache`, `GlobalCache`, and `AsyncGlobalCache`
  - New test suite: `memory_limit_edge_cases_tests.rs` (7 tests for sync caches)
  - New test suite: `memory_limit_edge_cases_async_tests.rs` (7 tests for async cache)
- 📝 Code Quality Improvements:
  - Eliminated code duplication in `AsyncGlobalCache` by extracting helper methods:
    - `find_min_frequency_key()` for LFU eviction
    - `find_arc_eviction_key()` for ARC eviction
  - Consistent with sync cache implementations
What's Changed
Full Changelog: 0.10.0...0.10.1
v0.10.0
💾 Memory-Based Limits!
Version 0.10.0 introduces memory-aware caching controls:
New Features:
- 💾 Memory-Based Limits - Control cache size by memory footprint
- 📏 `max_memory` Attribute - Specify memory limit (e.g. `max_memory = "100MB"`)
- 🔄 Combined Limits - Use both entry count and memory limits together
- ⚙️ Custom Memory Estimation - Implement `MemoryEstimator` for precise control
- 📊 Improved Statistics - Monitor memory usage and hit/miss rates together
Breaking Changes:
- Default policy remains LRU - No change, but now with memory limits!
- `MemoryEstimator` usage - Custom types with heap allocations must implement `MemoryEstimator`
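The idea behind custom estimation can be sketched in plain Rust. The trait below is an illustrative stand-in only; cachelito's actual `MemoryEstimator` trait may differ in name and signature:

```rust
use std::mem::size_of;

/// Illustrative stand-in for a memory-estimation trait: report the
/// approximate total footprint of a value, including heap allocations.
trait EstimateMemory {
    fn estimated_bytes(&self) -> usize;
}

struct UserRecord {
    id: u64,
    name: String,
}

impl EstimateMemory for UserRecord {
    fn estimated_bytes(&self) -> usize {
        // Stack size of the struct plus the String's heap buffer
        size_of::<Self>() + self.name.capacity()
    }
}

fn main() {
    let user = UserRecord { id: 1, name: String::from("alice") };
    println!("~{} bytes", user.estimated_bytes());
}
```

The key point is counting heap capacity, not just `size_of`: a `String` or `Vec` field owns a buffer that `size_of::<Self>()` alone would miss.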
Quick Start:
```rust
// Memory limit - eviction when total size exceeds 100MB
#[cache(max_memory = "100MB")]
fn large_object(id: u32) -> Vec<u8> {
    vec![0u8; 512 * 1024] // 512KB object
}

// Combined limits - max 500 entries OR 128MB
#[cache(limit = 500, max_memory = "128MB")]
fn compute(x: u64) -> u64 { x * x }
```

What's Changed
Full Changelog: 0.9.0...0.10.0
v0.9.0
🎯 ARC - A simple approach for Adaptive Replacement Cache!
Version 0.9.0 introduces a self-tuning cache policy that automatically adapts to your workload:
New Features:
- 🎯 ARC Eviction Policy - Adaptive Replacement Cache that combines LRU and LFU
- 🧠 Self-Tuning - Automatically balances between recency and frequency
- 🛡️ Scan-Resistant - Protects frequently accessed items from sequential scans
What's Changed
Full Changelog: 0.8.0...0.9.0
v0.8.0
🔥 LFU Eviction Policy & LRU as Default!
Version 0.8.0 completes the eviction policy trio and improves defaults:
New Features:
- 🔥 LFU Eviction Policy - Least Frequently Used eviction strategy
- 📊 Frequency Tracking - Automatic access frequency counters for each cache entry
- 🎯 Three Policies - Choose between FIFO, LRU (default), and LFU
- 📈 Smart Eviction - LFU keeps frequently accessed items cached longer
- ⚡ Optimized Performance - O(1) cache hits for LFU, O(n) eviction
- 🔄 Both Sync & Async - LFU available in `cachelito` and `cachelito-async`
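The O(1)-hit / O(n)-eviction trade-off mentioned above can be sketched with a frequency map (illustrative, not the crate's internal code):

```rust
use std::collections::HashMap;

/// Pick the eviction victim under LFU: the key with the lowest access
/// count. Bumping a counter on a cache hit is O(1); this scan is O(n).
fn lfu_victim(frequencies: &HashMap<String, u64>) -> Option<String> {
    frequencies
        .iter()
        .min_by_key(|(_, count)| **count)
        .map(|(key, _)| key.clone())
}

fn main() {
    let mut freq = HashMap::new();
    freq.insert("hot".to_string(), 42);
    freq.insert("warm".to_string(), 7);
    freq.insert("cold".to_string(), 1);
    println!("evict: {:?}", lfu_victim(&freq)); // evict: Some("cold")
}
```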
Breaking Change:
- Default policy changed from FIFO to LRU - LRU is more effective for most use cases. To keep FIFO behavior, explicitly use `policy = "fifo"`
What's Changed
- Add LFU (Least Frequently Used) eviction policy support by @josepdcs in #18
- feat(GH-17): Added memory estimator by @josepdcs in #19
Full Changelog: 0.7.0...0.8.0