Releases: josepdcs/cachelito

v0.16.0

08 Jan 20:07
0a027a1

🪟 W-TinyLFU (Windowed TinyLFU) Policy!

Version 0.16.0 introduces the W-TinyLFU eviction policy, a state-of-the-art cache replacement algorithm that delivers excellent hit rates:

Key Features:

  • 🪟 W-TinyLFU Policy - Two-segment architecture (window + protected) for optimal caching

    • Window segment (FIFO) captures recent items
    • Protected segment (LFU) keeps frequently accessed items
    • Configurable window_ratio for workload tuning
  • 🎯 Superior Hit Rates - 5-15% better than traditional LRU on mixed workloads

  • 🛡️ Cache Pollution Protection - Prevents one-hit wonders from evicting valuable data

  • ⚙️ Configurable - Tune window_ratio to emphasize recency vs frequency

Basic Example:

use cachelito::cache;

// Basic W-TinyLFU cache
#[cache(limit = 1000, policy = "w_tinylfu")]
fn fetch_user_data(user_id: u64) -> UserData {
    database.fetch_user(user_id)
}

// Custom window ratio for recency emphasis
#[cache(
    limit = 1000,
    policy = "w_tinylfu",
    window_ratio = 0.3  // 30% window, 70% protected
)]
fn fetch_trending_content(id: u64) -> Content {
    api_client.fetch(id)
}

How It Works:

W-TinyLFU splits the cache into two segments:

  1. Window (20% by default): Recent items using FIFO
  2. Protected (80%): Frequently accessed items using LFU

This dual-segment approach provides excellent performance across various workload patterns.

Configuration Options:

  • window_ratio (0.01-0.99, default: 0.20) - Balance between recency and frequency
    • Smaller (0.1-0.15): More emphasis on frequency (stable workloads)
    • Larger (0.3-0.4): More emphasis on recency (changing workloads)
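
To make the ratio concrete, here is an illustrative sketch of how a window_ratio could partition a fixed capacity into a window (FIFO) segment and a protected (LFU) segment. This is a sketch of the concept only, not cachelito's internal implementation:

```rust
// Illustration: partition a cache capacity into window and protected
// segments according to window_ratio. Not cachelito's internal code.
fn segment_sizes(limit: usize, window_ratio: f64) -> (usize, usize) {
    // Keep at least one slot in the window segment.
    let window = ((limit as f64 * window_ratio).round() as usize).max(1);
    let protected = limit - window;
    (window, protected)
}

fn main() {
    // Default ratio: 20% window, 80% protected.
    assert_eq!(segment_sizes(1000, 0.20), (200, 800));
    // Recency-heavy tuning, as in the fetch_trending_content example.
    assert_eq!(segment_sizes(1000, 0.30), (300, 700));
}
```

With the default ratio, a limit = 1000 cache admits new items into a 200-entry window and promotes frequently accessed ones into the 800-entry protected segment.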

Current Status (v0.16.0):

This is the initial, fully functional implementation of W-TinyLFU. Future versions will add:

  • Count-Min Sketch admission policy
  • Automatic periodic decay
  • Segment-specific metrics

What's Changed

  • Add W-TinyLFU (Windowed TinyLFU) policy support by @josepdcs in #37

Full Changelog: 0.15.0...0.16.0

v0.15.0

18 Dec 13:54
c7dced8

⏰ TLRU (Time-aware Least Recently Used) Policy!

Version 0.15.0 introduces the TLRU eviction policy, combining recency, frequency, and time-based factors for intelligent cache management:

New Features:

  • TLRU Policy - Time-aware LRU that considers age, frequency, and recency
  • 🎯 Smart Eviction - Prioritizes entries approaching TTL expiration
  • 📊 Triple Scoring - Combines frequency^weight × position_weight × age_factor
  • 🎚️ Frequency Weight - NEW: frequency_weight parameter to fine-tune recency vs frequency balance
  • 🔄 Automatic Aging - Entries close to TTL get lower priority
  • 💡 Backward Compatible - Without TTL, behaves like ARC
  • Optimal for Time-Sensitive Data - Perfect for caches with expiring content

Quick Start:

use cachelito::cache;

// Time-aware caching with TLRU
#[cache(policy = "tlru", limit = 100, ttl = 300)]
fn fetch_weather(city: String) -> WeatherData {
    // Entries approaching 5-minute TTL are prioritized for eviction
    fetch_from_api(city)
}

// TLRU without TTL behaves like ARC
#[cache(policy = "tlru", limit = 50)]
fn compute_expensive(n: u64) -> u64 {
    // Considers both frequency and recency
    expensive_calculation(n)
}

// NEW: Fine-tune with frequency_weight
#[cache(policy = "tlru", limit = 100, ttl = 300, frequency_weight = 1.5)]
fn fetch_popular_content(id: u64) -> Content {
    // frequency_weight > 1.0 emphasizes frequency over recency
    // Popular entries stay cached longer
    database.fetch(id)
}

How TLRU Works:

  • Score Formula: frequency^weight × position_weight × age_factor
  • Frequency Weight: Control balance between recency and frequency (default = 1.0)
    • < 1.0: Emphasize recency (good for time-sensitive data)
    • > 1.0: Emphasize frequency (good for popular content)
  • Age Factor: Decreases as entry approaches TTL expiration (0.0 = expired, 1.0 = fresh)
  • Eviction: Lowest score = first to evict
  • Best For: Time-sensitive data, mixed access patterns, expiring content
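
The score formula above can be sketched as a small function. The field names and call shape here are assumptions for illustration, not cachelito's internal code:

```rust
// Illustrative sketch of the TLRU scoring idea:
// frequency^weight × position_weight × age_factor.
fn tlru_score(frequency: f64, frequency_weight: f64, position_weight: f64, age_factor: f64) -> f64 {
    frequency.powf(frequency_weight) * position_weight * age_factor
}

fn main() {
    // A fresh, frequently-hit entry scores high...
    let hot = tlru_score(10.0, 1.0, 1.0, 1.0);
    // ...while the same entry near TTL expiry (age_factor -> 0) scores low
    // and becomes the preferred eviction victim.
    let near_expiry = tlru_score(10.0, 1.0, 1.0, 0.1);
    assert!(near_expiry < hot);
    // frequency_weight > 1.0 amplifies the advantage of popular entries.
    assert!(tlru_score(10.0, 1.5, 1.0, 1.0) > hot);
}
```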

What's Changed

Full Changelog: 0.14.0...0.15.0

v0.14.0

10 Dec 17:42
c0d858f

🎯 Conditional Caching with cache_if!

Version 0.14.0 introduces conditional caching, giving you fine-grained control over when results should be cached based on custom predicates:

New Features:

  • 🎯 Conditional Caching - Control caching with custom cache_if predicates
  • 🚫 Error Filtering - Automatically skip caching errors (default for Result types)
  • 📊 Value-Based Caching - Cache only results meeting specific criteria (non-empty, valid, etc.)
  • 💡 Smart Defaults - Result<T, E> types only cache Ok values by default
  • 🔧 Custom Logic - Use any Rust logic in your cache predicates
  • Zero Overhead - No performance penalty when predicates return true
  • 🔒 Type-Safe - Compile-time validation of predicate functions

Quick Start:

use cachelito::cache;

// Only cache non-empty results
fn should_cache(_key: &String, result: &Vec<String>) -> bool {
    !result.is_empty()
}

#[cache(scope = "global", limit = 100, cache_if = should_cache)]
fn fetch_items(category: String) -> Vec<String> {
    // Empty results won't be cached
    database.query(category)
}

// Default behavior: Result types only cache Ok values
#[cache(scope = "global", limit = 50)]
fn validate_email(email: String) -> Result<String, String> {
    if email.contains('@') {
        Ok(format!("Valid: {}", email))  // ✅ Cached
    } else {
        Err(format!("Invalid: {}", email))  // ❌ NOT cached
    }
}

// Custom predicate for Result types
fn cache_only_ok(_key: &String, result: &Result<User, Error>) -> bool {
    result.is_ok()
}

#[cache(scope = "global", cache_if = cache_only_ok)]
fn fetch_user(id: u32) -> Result<User, Error> {
    // Only successful results are cached
    api_client.get_user(id)
}

Common Use Cases:

  • ✅ Don't cache empty collections
  • ✅ Skip caching None values
  • ✅ Only cache successful HTTP responses
  • ✅ Filter out invalid or temporary data
  • ✅ Cache based on value characteristics
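
Predicates for the first two cases above can be written as plain functions following the (key, result) signature shown in the should_cache example; these are hedged sketches for illustration:

```rust
// Sketch: cache_if predicates for common cases. The (key, result)
// signature mirrors the should_cache example in this release.
fn cache_non_empty(_key: &String, result: &Vec<String>) -> bool {
    !result.is_empty()
}

fn cache_some(_key: &String, result: &Option<String>) -> bool {
    result.is_some()
}

fn main() {
    // Empty collections and None values are rejected from the cache.
    assert!(!cache_non_empty(&"k".to_string(), &vec![]));
    assert!(cache_non_empty(&"k".to_string(), &vec!["item".to_string()]));
    assert!(!cache_some(&"k".to_string(), &None));
    assert!(cache_some(&"k".to_string(), &Some("value".to_string())));
}
```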

See also: examples/conditional_caching.rs

What's Changed

Full Changelog: 0.13.0...0.14.0

v0.13.0

03 Dec 10:39
871efc7

🎯 Conditional Invalidation with Custom Check Functions!

Version 0.13.0 introduces powerful conditional invalidation, allowing you to selectively invalidate cache entries based on runtime conditions:

New Features:

  • 🎯 Conditional Invalidation - Invalidate entries matching custom check functions (predicates)
  • 🌐 Global Conditional Invalidation Support - Apply check functions across all registered caches
  • 🔑 Key-Based Filtering - Match entries by key patterns, ranges, or any custom logic
  • 🏷️ Named Invalidation Check Functions - Automatic validation on every cache access with invalidate_on = function_name attribute
  • Automatic Registration - All global-scope caches support conditional invalidation by default
  • 🔒 Thread-Safe Execution - Safe concurrent check function execution
  • 💡 Flexible Conditions - Use any Rust logic in your check functions

Quick Start:

use cachelito::{cache, invalidate_with, invalidate_all_with};
use std::time::Duration;

// Named invalidation check function (evaluated on every access)
fn is_stale(_key: &String, value: &User) -> bool {
    value.updated_at.elapsed() > Duration::from_secs(3600)
}

#[cache(scope = "global", name = "get_user", invalidate_on = is_stale)]
fn get_user(user_id: u64) -> User {
    fetch_user_from_db(user_id)
}

// Manual conditional invalidation
invalidate_with("get_user", |key| {
    key.parse::<u64>().unwrap_or(0) > 1000
});

// Global invalidation across all caches
invalidate_all_with(|_cache_name, key| {
    key.parse::<u64>().unwrap_or(0) >= 1000
});

What's Changed

Full Changelog: 0.12.0...0.13.0

v0.12.0

28 Nov 12:23
a9aa94f

🔥 Smart Cache Invalidation!

Version 0.12.0 introduces intelligent cache invalidation mechanisms beyond simple TTL expiration:

New Features:

  • 🏷️ Tag-Based Invalidation - Group related caches and invalidate them together
  • 📡 Event-Driven Invalidation - Trigger invalidation when application events occur
  • 🔗 Dependency-Based Invalidation - Cascade invalidation to dependent caches
  • 🎯 Manual Invalidation - Invalidate specific caches by name
  • 🔄 Flexible Combinations - Use tags, events, and dependencies together
  • Zero Overhead - No performance impact when not using invalidation
  • 🔒 Thread-Safe - All operations are atomic and concurrent-safe

Quick Start:

use cachelito::{cache, invalidate_by_tag, invalidate_by_event};

// Tag-based grouping
#[cache(tags = ["user_data", "profile"], name = "get_user_profile")]
fn get_user_profile(user_id: u64) -> UserProfile {
    fetch_from_db(user_id)
}

// Event-driven invalidation
#[cache(events = ["user_updated"], name = "get_user_settings")]
fn get_user_settings(user_id: u64) -> Settings {
    fetch_settings(user_id)
}

// Invalidate all user_data caches
invalidate_by_tag("user_data");

// Invalidate on event
invalidate_by_event("user_updated");

See also: examples/smart_invalidation.rs

What's Changed

Full Changelog: 0.11.0...0.12.0

v0.11.0

26 Nov 11:31
ad0e6de

🎲 Random Replacement Policy!

Version 0.11.0 introduces the Random eviction policy for baseline benchmarking and simple use cases:

New Features:

  • 🎲 Random Eviction Policy - Randomly evicts entries when cache is full
  • O(1) Performance - Constant-time eviction with no access tracking overhead
  • 🔒 Thread-Safe RNG - Uses fastrand for fast, lock-free random selection
  • 📊 Minimal Overhead - No order updates on cache hits (unlike LRU/ARC)
  • 🎯 Benchmark Baseline - Ideal for comparing policy effectiveness
  • 🔄 All Cache Types - Available in sync (thread-local & global) and async caches
  • 📚 Full Support - Works with limit, ttl, and max_memory attributes

Quick Start:

use cachelito::cache;

// Simple random eviction - O(1) performance
#[cache(policy = "random", limit = 1000)]
fn baseline_cache(x: u64) -> u64 { x * x }

// Random with memory limit
#[cache(policy = "random", max_memory = "100MB")]
fn random_with_memory(key: String) -> Vec<u8> {
    vec![0u8; 1024]
}

When to Use Random:

  • Baseline for performance benchmarks
  • Truly random access patterns
  • Simplicity preferred over optimization
  • Reducing lock contention vs LRU/LFU

What's Changed

Full Changelog: 0.10.1...0.11.0

v0.10.1

21 Nov 19:24
b109fb7

  • 📦 Version Unification: All crates now use version 0.10.1 for consistency
    • cachelito: 0.10.0 → 0.10.1
    • cachelito-core: 0.10.0 → 0.10.1
    • cachelito-macros: 0.10.0 → 0.10.1
    • cachelito-macro-utils: 0.10.0 → 0.10.1
    • cachelito-async: 0.2.0 → 0.10.1
    • cachelito-async-macros: 0.2.0 → 0.10.1

Fixed

  • 🔧 Async Cache Integration: Updated cachelito-async and cachelito-async-macros
    • Async caches now properly support max_memory attribute
    • insert_with_memory() method no longer requires MemoryEstimator when max_memory is not specified
    • Added protection against infinite eviction loops when value size exceeds max_memory

Added

  • 🛡️ Infinite Loop Protection: All caches (sync and async) now prevent infinite eviction loops
    • When a value's memory size exceeds max_memory, it's not cached (returns early)
    • Applies to: ThreadLocalCache, GlobalCache, and AsyncGlobalCache
    • New test suite: memory_limit_edge_cases_tests.rs (7 tests for sync caches)
    • New test suite: memory_limit_edge_cases_async_tests.rs (7 tests for async cache)
  • 📝 Code Quality Improvements:
    • Eliminated code duplication in AsyncGlobalCache by extracting helper methods:
      • find_min_frequency_key() for LFU eviction
      • find_arc_eviction_key() for ARC eviction
    • Consistent with sync cache implementations

What's Changed

Full Changelog: 0.10.0...0.10.1

v0.10.0

21 Nov 19:10
eaff430

💾 Memory-Based Limits!

Version 0.10.0 introduces memory-aware caching controls:

New Features:

  • 💾 Memory-Based Limits - Control cache size by memory footprint
  • 📏 max_memory Attribute - Specify memory limit (e.g. max_memory = "100MB")
  • 🔄 Combined Limits - Use both entry count and memory limits together
  • ⚙️ Custom Memory Estimation - Implement MemoryEstimator for precise control
  • 📊 Improved Statistics - Monitor memory usage and hit/miss rates together

Breaking Changes:

  • Default policy remains LRU - No change, but now with memory limits!
  • MemoryEstimator usage - Custom types with heap allocations must implement MemoryEstimator
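
To see why heap-allocating types need explicit estimation: size_of only measures the stack part of a value (pointer, length, capacity), not the heap buffer behind it. The function below is an illustration of that arithmetic, not the crate's actual MemoryEstimator trait; see the crate docs for the real interface:

```rust
// Illustration: an approximate memory footprint for Vec<u8> must count
// both the inline struct and the heap buffer it owns. Not cachelito's
// actual MemoryEstimator trait.
fn approx_vec_bytes(v: &Vec<u8>) -> usize {
    std::mem::size_of::<Vec<u8>>() + v.capacity()
}

fn main() {
    let v = vec![0u8; 512 * 1024]; // the 512KB object from the example below
    // The estimate covers the full heap allocation, which size_of alone
    // (typically 24 bytes on 64-bit targets) would miss entirely.
    assert!(approx_vec_bytes(&v) >= 512 * 1024);
    assert!(approx_vec_bytes(&v) > v.len());
}
```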

Quick Start:

use cachelito::cache;

// Memory limit - eviction when total size exceeds 100MB
#[cache(max_memory = "100MB")]
fn large_object(id: u32) -> Vec<u8> {
    vec![0u8; 512 * 1024] // 512KB object
}

// Combined limits - max 500 entries OR 128MB
#[cache(limit = 500, max_memory = "128MB")]
fn compute(x: u64) -> u64 { x * x }

What's Changed

Full Changelog: 0.9.0...0.10.0

v0.9.0

17 Nov 17:29
3dcaf91

🎯 ARC (Adaptive Replacement Cache) Policy!

Version 0.9.0 introduces a self-tuning cache policy that automatically adapts to your workload:

New Features:

  • 🎯 ARC Eviction Policy - Adaptive Replacement Cache that combines LRU and LFU
  • 🧠 Self-Tuning - Automatically balances between recency and frequency
  • 🛡️ Scan-Resistant - Protects frequently accessed items from sequential scans
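
The self-tuning behavior comes from the classic ARC idea of a target size p for the recency side that grows on ghost-recency hits and shrinks on ghost-frequency hits. The sketch below illustrates that published adaptation rule; it is not cachelito's internal code:

```rust
// Sketch of the classic ARC adaptation rule: `p` is the target size of
// the recency (T1) side. A hit in the ghost-recency list B1 means "we
// evicted a recent item too early", so p grows; a hit in the
// ghost-frequency list B2 shrinks it. Illustration only.
fn adapt_target(p: usize, capacity: usize, b1_len: usize, b2_len: usize, hit_in_b1: bool) -> usize {
    if hit_in_b1 {
        // Favor recency; step is larger when B2 dominates B1.
        let delta = (b2_len / b1_len.max(1)).max(1);
        (p + delta).min(capacity)
    } else {
        // Favor frequency; step is larger when B1 dominates B2.
        let delta = (b1_len / b2_len.max(1)).max(1);
        p.saturating_sub(delta)
    }
}

fn main() {
    // Misses that would have been recency hits push p up...
    assert!(adapt_target(10, 100, 5, 5, true) > 10);
    // ...and misses that would have been frequency hits pull it down,
    // which is what makes ARC scan-resistant without manual tuning.
    assert!(adapt_target(10, 100, 5, 5, false) < 10);
}
```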

What's Changed

  • Add ARC (Adaptive Replacement Cache) policy support by @josepdcs in #24

Full Changelog: 0.8.0...0.9.0

v0.8.0

14 Nov 17:21
edb6074

🔥 LFU Eviction Policy & LRU as Default!

Version 0.8.0 completes the eviction policy trio and improves defaults:

New Features:

  • 🔥 LFU Eviction Policy - Least Frequently Used eviction strategy
  • 📊 Frequency Tracking - Automatic access frequency counters for each cache entry
  • 🎯 Three Policies - Choose between FIFO, LRU (default), and LFU
  • 📈 Smart Eviction - LFU keeps frequently accessed items cached longer
  • Optimized Performance - O(1) cache hits for LFU, O(n) eviction
  • 🔄 Both Sync & Async - LFU available in cachelito and cachelito-async

Breaking Change:

  • Default policy changed from FIFO to LRU - LRU is more effective for most use cases. To keep FIFO behavior, explicitly use policy = "fifo"
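
The "O(1) hits, O(n) eviction" trade-off above comes from the victim search: hits bump a counter in constant time, but eviction scans all entries for the minimum frequency. A sketch of that scan (echoing the find_min_frequency_key helper later mentioned in the 0.10.1 notes; not cachelito's internal code):

```rust
use std::collections::HashMap;

// Illustration of LFU's O(n) eviction: scan every entry's frequency
// counter and pick the least frequently used key as the victim.
fn find_min_frequency_key(frequencies: &HashMap<String, u64>) -> Option<String> {
    frequencies
        .iter()
        .min_by_key(|(_, freq)| **freq)
        .map(|(key, _)| key.clone())
}

fn main() {
    let mut freq = HashMap::new();
    freq.insert("hot".to_string(), 42u64);
    freq.insert("warm".to_string(), 7u64);
    freq.insert("cold".to_string(), 1u64);
    // The least frequently used entry is the eviction victim.
    assert_eq!(find_min_frequency_key(&freq), Some("cold".to_string()));
}
```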

What's Changed

Full Changelog: 0.7.0...0.8.0