Classroom Contents
Fast RDMA-based Ordered Key-Value Store Using Remote Learned Cache
- 1 Intro
- 2 KVS: key pillar for distributed systems
- 3 Traditional KVS uses RPC (Server-centric)
- 4 Challenge: limited NIC abstraction
- 5 Existing systems adopt caching
- 6 High cache-miss cost for caching the tree: a tree node can be much larger than the KV pair
- 7 Trade-off of existing KVS
- 8 Overview of XSTORE: hybrid architecture
- 9 Our approach: learned cache, using ML as the cache structure for a tree-based index (motivated by the learned index [1])
- 10 Client-direct Get() using learned cache
- 11 Benefits of the learned cache
- 12 Challenges of learned cache
- 13 Outline of the remaining content: server-side data structure for dynamic workloads
- 14 Models cannot learn dynamic B+Tree addresses: addresses can only be learned when they are sorted
- 15 Solution: another layer of indirection (observation: leaf nodes are logically sorted)
- 16 Client-direct Get() using model & TT (translation table); a sketch of this lookup path follows the list
- 17 Model retraining: the model is retrained at the server in background threads, at small cost and with extra CPU usage at the server
- 18 Stale model handling: background updates cause stale learned models
- 19 Performance of XSTORE on YCSB: 100M KV pairs, uniform workloads
- 20 Sensitive to the dataset
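
Chapter 16 describes the core lookup path: the learned cache predicts a position range for a key, the translation table (TT) maps logical leaf positions to remote addresses, and the client fetches the leaf with a one-sided RDMA read and searches it locally. The sketch below is a minimal, hypothetical illustration of that flow in Python; the names (`predict_range`, `client_get`, `LEAF_FANOUT`), the linear model, and the dict standing in for RDMA-readable server memory are assumptions for illustration, not XSTORE's actual code.

```python
from bisect import bisect_left

LEAF_FANOUT = 4  # keys per leaf node (assumed for this sketch)

def predict_range(model, key):
    """A linear model predicts a logical key position with a bounded error."""
    slope, intercept, max_err = model
    pos = int(slope * key + intercept)
    return max(pos - max_err, 0), pos + max_err

def client_get(key, model, translation_table, rdma_read):
    """Client-direct lookup: predict, translate, read a leaf, search locally."""
    lo, hi = predict_range(model, key)
    # Logical leaf numbers covering the predicted position range.
    for leaf_no in range(lo // LEAF_FANOUT, hi // LEAF_FANOUT + 1):
        remote_addr = translation_table.get(leaf_no)   # indirection layer (TT)
        if remote_addr is None:                        # predicted past the last leaf
            continue
        leaf = rdma_read(remote_addr)                  # stands in for a one-sided RDMA read
        idx = bisect_left([k for k, _ in leaf], key)   # local search in the fetched leaf
        if idx < len(leaf) and leaf[idx][0] == key:
            return leaf[idx][1]
    return None  # miss (or a stale model; a real client would fall back to the server)

# Minimal demo with "remote" memory simulated by a dict:
keys = list(range(0, 200, 2))                          # sorted keys 0, 2, ..., 198
leaves = [[(k, k * 10) for k in keys[i:i + LEAF_FANOUT]]
          for i in range(0, len(keys), LEAF_FANOUT)]
remote_memory = {0x1000 + i * 64: leaf for i, leaf in enumerate(leaves)}
tt = {i: 0x1000 + i * 64 for i in range(len(leaves))}
model = (0.5, 0.0, 1)                                  # pos ≈ key / 2, error bound ±1
print(client_get(42, model, tt, remote_memory.get))    # -> 420
```

The extra indirection matches the observation in chapter 15: because leaf nodes are logically sorted, the model only needs to learn logical positions, which is presumably how it stays usable while the server's B+Tree changes underneath.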