Cache Eviction: LRU, LFU & W-TinyLFU
Published: Sun Feb 15 2026 | Modified: Sat Feb 07 2026, 2 minute read.
Cache Eviction Algorithms
Introduction: The "Full Suitcase" Problem
Memory is expensive. You can't store everything in your cache (Redis, Memcached, or local RAM). When your "suitcase" is full and a new item arrives, you must decide which old item to throw away.
Cache Eviction is the algorithm that makes this decision. A good algorithm keeps the "Hot Data" (what you'll need soon) and discards the "Cold Data."
Core Algorithms
1. LRU (Least Recently Used)
- Logic: Throw away the item that hasn't been used for the longest time.
- How it works: A Hash Map combined with a Doubly Linked List. Every time an item is accessed, move it to the front. The item at the tail is the victim.
- Pros: Very simple, works well for most "recency-based" workloads.
- Cons: Vulnerable to "Sparse Scans." If a script reads 1 million old items once, it will flush out all your frequent hot data.
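The hash-map-plus-doubly-linked-list idea can be sketched in a few lines of Python using `collections.OrderedDict`, which combines both structures internally (the class and method names here are illustrative, not from any particular library):

```python
from collections import OrderedDict

class LRUCache:
    """Illustrative LRU cache: most recently used entries live at the end."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")     # "a" is now the most recently used
cache.put("c", 3)  # evicts "b", the least recently used
```

Both `get` and `put` are constant time, which is exactly why LRU is the industry default.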
2. LFU (Least Frequently Used)
- Logic: Throw away the item with the lowest access count.
- How it works: Keeps a counter for every item.
- Pros: Good for "frequency-based" workloads (e.g., a static homepage that is always popular).
- Cons: Hard to implement efficiently (requires complex bucketing). Also, items that were popular 1 hour ago but are dead now might stay in the cache forever (Stale Data).
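A naive Python sketch makes the implementation problem visible: keeping a counter per item is easy, but finding the minimum counter at eviction time is an O(n) scan, which is why efficient LFU needs frequency buckets. The class name is hypothetical:

```python
class NaiveLFUCache:
    """Illustrative LFU cache; eviction scans all counters, hence O(n)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}
        self.counts = {}

    def get(self, key):
        if key not in self.data:
            return None
        self.counts[key] += 1
        return self.data[key]

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            victim = min(self.counts, key=self.counts.get)  # O(n) scan
            del self.data[victim], self.counts[victim]
        self.data[key] = value
        self.counts[key] = self.counts.get(key, 0) + 1

cache = NaiveLFUCache(2)
cache.put("home", "page")
cache.get("home")
cache.get("home")   # "home" now has count 3
cache.put("x", 1)
cache.put("y", 2)   # evicts "x" (count 1), not the popular "home"
```

Note how the counters never decay: this is the Stale Data problem in code form.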
3. W-TinyLFU (Window TinyLFU)
- Logic: A hybrid approach using a tiny "Admission Window" and a Bloom-Filter-like sketch to track frequency.
- The Hero: This is the algorithm used by Caffeine (Java) and Ristretto (Go). It provides near-perfect hit rates and is resistant to sparse scans.
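The key piece is the admission decision: a compact frequency sketch (Count-Min Sketch) estimates how often each key has been seen, and a candidate leaving the admission window only enters the main cache if it looks more popular than the main cache's would-be victim. The sketch below is a simplified illustration of that idea, not Caffeine's or Ristretto's actual implementation (real versions add counter aging and other refinements):

```python
import hashlib

class CountMinSketch:
    """Tiny frequency sketch: depth hash rows of width counters (toy sizes)."""

    def __init__(self, width=256, depth=4):
        self.width, self.depth = width, depth
        self.rows = [[0] * width for _ in range(depth)]

    def _indexes(self, item):
        for i in range(self.depth):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield i, int.from_bytes(h[:4], "big") % self.width

    def add(self, item):
        for i, j in self._indexes(item):
            self.rows[i][j] += 1

    def estimate(self, item):
        # Minimum across rows limits over-counting from hash collisions.
        return min(self.rows[i][j] for i, j in self._indexes(item))

def admit(sketch, candidate, victim):
    """TinyLFU admission: keep whichever key the sketch says is hotter."""
    return sketch.estimate(candidate) > sketch.estimate(victim)

sketch = CountMinSketch()
for _ in range(5):
    sketch.add("hot")          # a genuinely popular key
sketch.add("scanned-once")     # a one-shot scan key
admit(sketch, "scanned-once", "hot")  # False: the scan item is rejected
```

This admission filter is what makes W-TinyLFU scan-resistant: a key touched once during a bulk scan cannot displace a key the sketch knows is frequently accessed.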
Typical Business Scenarios
- ✅ Session Store: Use LRU. If a user hasn't touched their session in 30 minutes, it's safe to evict.
- ✅ Product Catalog: Use W-TinyLFU. Some products are always popular (frequency), others are trending right now (recency).
- ✅ Database Buffer Pool: MySQL and Postgres use modified versions of LRU (like LRU-2) to prevent full-table scans from ruining the cache.
Performance & Complexity
- LRU: O(1) for both `get` and `put`.
- W-TinyLFU: O(1) amortized, but requires more complex logic and a frequency sketch (Count-Min Sketch).
Summary
"LRU is the industry default because it's simple and fast. But if your system faces unpredictable traffic spikes or large scans, upgrade to W-TinyLFU to keep your cache hit rate high."
