Overview

Explore advanced caching strategies in this 38-minute conference talk from Strange Loop. Dive into the world of efficient data serving and learn how to balance availability with correctness in complex distributed systems. Discover the challenges of cross-data-center consistency, unreliable data stores, and race conditions in request paths. Begin with an overview of naive caching strategies, their benefits, and potential pitfalls, illustrated with real-world production examples from service-based architectures. Delve into the concept of dynamically scaled TTLs as a defense against inconsistent cached data. Learn how scaling cache TTLs based on confidence values can significantly reduce cache inconsistency rates, even when dealing with slow backing stores and competing cross-data-center writes. Examine the abstractions that simplify implementation for service developers. Gain insights from the experiences of Twitter's User Service team, which handles millions of operations per second of user data read and write traffic. Whether you're an experienced developer fine-tuning your caching strategy or just starting to explore caching improvements, you'll come away with immediately applicable practices to enhance your system's performance and reliability.
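As a rough illustration of the idea (not the implementation presented in the talk), the following Python sketch shows one way confidence-scaled TTLs could work; the confidence score, TTL bounds, and function names here are hypothetical.

    # Hypothetical sketch: shorten an entry's TTL when we are less confident the
    # cached value is consistent (e.g., while a cross-data-center write may still
    # be propagating). All names and thresholds below are illustrative only.
    MAX_TTL_SECONDS = 3600  # full TTL for values we trust completely
    MIN_TTL_SECONDS = 10    # floor so even low-confidence values get brief caching

    def scaled_ttl(confidence: float) -> int:
        """Map a confidence score in [0.0, 1.0] to a TTL in seconds."""
        confidence = min(1.0, max(0.0, confidence))
        return max(MIN_TTL_SECONDS, int(MAX_TTL_SECONDS * confidence))

    # A caller would pass the result to its cache client, e.g.
    # cache.set(key, value, ttl=scaled_ttl(confidence)), so a value read during
    # replication lag expires quickly instead of lingering for the full TTL.
    if __name__ == "__main__":
        for confidence in (1.0, 0.5, 0.05):
            print(f"confidence={confidence:.2f} -> ttl={scaled_ttl(confidence)}s")

The effect is that the window during which a stale value can be served shrinks in proportion to the uncertainty, at the cost of more frequent refreshes for low-confidence entries.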
Syllabus
"Lazy Defenses: Using Scaled TTLs to Keep Your Cache Correct" by Bonnie Eisenman
Taught by
Strange Loop Conference