Caching strategies determine how data flows between the cache and underlying storage, trading off consistency, performance, and complexity. Cache-aside (lazy loading) requires application code to manage the cache explicitly: check the cache first, fetch from the database on a miss, then populate the cache. It is simple but error-prone: developers must remember to invalidate cached data when the source changes. A read-through cache handles reads automatically: misses trigger database fetches that populate the cache transparently to the application. Write-through writes to both cache and database synchronously, ensuring consistency but adding latency to every write. Write-behind (write-back) writes to the cache immediately and flushes to the database asynchronously, improving write performance but risking data loss if the cache fails before flushing.

Eviction policies govern what to remove when the cache fills up. LRU (Least Recently Used) evicts the items not accessed recently, which works well for access patterns with temporal locality. LFU (Least Frequently Used) evicts the items accessed fewest times overall, better for stable popularity distributions. Time-based eviction (TTL) removes items after a fixed duration regardless of access patterns, essential for data that changes externally.

Choosing a strategy requires understanding access patterns, consistency requirements, and failure modes. Financial systems use write-through for durability. Content delivery uses TTL-based caching. Recommendation engines use LRU for personalized content. Hybrid approaches combine strategies for different data types.
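The cache-aside flow described above can be sketched in a few lines. This is a minimal illustration, not a production pattern: `cache` and `db` are hypothetical in-memory stand-ins for a real cache client and a database access layer.

```python
cache = {}                         # stand-in for an external cache (e.g. Redis)
db = {"user:1": {"name": "Ada"}}   # stand-in for the database (source of truth)

def get(key):
    value = cache.get(key)         # 1. check the cache first
    if value is None:
        value = db.get(key)        # 2. on a miss, fetch from the database
        if value is not None:
            cache[key] = value     # 3. populate the cache for the next read
    return value

def update(key, value):
    db[key] = value                # write to the source of truth...
    cache.pop(key, None)           # ...and invalidate the now-stale cache entry
```

The `update` function shows the error-prone part: forgetting the `cache.pop` leaves readers seeing stale data until the entry is otherwise evicted.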