Overview
Syllabus
Intro
KVS: key pillar for distributed systems
Traditional KVS uses RPC (Server-centric)
Challenge: limited NIC abstraction
Existing systems adopt caching
High cache miss cost when caching the tree: a tree node can be much larger than the KV pair
Trade-off of existing KVS
Overview of XSTORE: hybrid architecture
Our approach: a learned cache, using ML models as the cache structure for a tree-based index; motivated by the learned index [1]
Client-direct Get() using learned cache
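The sketch below illustrates the idea of a client-direct Get() through a learned cache (illustrative only, not XSTORE's code): a model trained over the sorted keys predicts a position with a bounded error, and the client fetches only that range, which would be a single one-sided RDMA READ in the real system; here the "remote" array is simulated locally.

```python
import bisect

class LearnedCache:
    """Linear model key -> position over a sorted key array, plus a max error bound."""
    def __init__(self, keys):
        n = len(keys)
        mean_k = sum(keys) / n
        mean_p = (n - 1) / 2
        var = sum((k - mean_k) ** 2 for k in keys) or 1
        self.slope = sum((k - mean_k) * (i - mean_p)
                         for i, k in enumerate(keys)) / var
        self.intercept = mean_p - self.slope * mean_k
        # Max prediction error over the training keys bounds the search range.
        self.err = max(abs(i - self.predict(k)) for i, k in enumerate(keys))

    def predict(self, key):
        return int(self.slope * key + self.intercept)

def learned_get(cache, remote_keys, remote_vals, key):
    # Predict a position, then fetch only [pos - err, pos + err]: one
    # one-sided RDMA READ in the real system, a local slice here.
    pos = cache.predict(key)
    lo = max(0, pos - cache.err)
    hi = min(len(remote_keys), pos + cache.err + 1)
    span = remote_keys[lo:hi]
    i = bisect.bisect_left(span, key)
    if i < len(span) and span[i] == key:
        return remote_vals[lo + i]
    return None

# Usage: an exactly linear key distribution, so the error bound is 0.
keys = list(range(0, 200, 2))
vals = [k * 10 for k in keys]
print(learned_get(LearnedCache(keys), keys, vals, 42))   # -> 420
```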
Benefits of the learned cache
Challenges of learned cache
Outline of the remaining content: server-side data structure for dynamic workloads
Models cannot learn dynamic B+Tree addresses: a model can only learn addresses that are sorted
Solution: another layer of indirection, based on the observation that leaf nodes are logically sorted
Client-direct Get() using model & translation table (TT)
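A minimal sketch of this lookup path (names such as predict_leaf_range and the TT layout are assumptions for illustration, not the paper's API): the model predicts a range of logical leaf numbers, and the translation table maps each logical number to the leaf's current location, so leaves can move without invalidating the model.

```python
# Sketch of a client-direct Get() through a learned model plus a translation
# table (TT). The remote leaf array is simulated with a local dict; each TT
# lookup plus leaf fetch would be one-sided RDMA reads in the real system.
def get_via_model_and_tt(predict_leaf_range, translation_table, leaves, key):
    lo, hi = predict_leaf_range(key)          # model: key -> logical leaf range
    for logical_no in range(lo, hi + 1):
        addr = translation_table[logical_no]  # TT: logical leaf -> leaf address
        leaf = leaves[addr]                   # would be one RDMA READ per leaf
        for k, v in leaf:
            if k == key:
                return v
    return None

# Tiny usage example: leaf 1 lives at a "moved" address; only the TT knows
# about the move, the model does not need retraining.
leaves = {100: [(1, "a"), (2, "b")], 204: [(3, "c"), (4, "d")]}
tt = {0: 100, 1: 204}
print(get_via_model_and_tt(lambda k: (0, 1), tt, leaves, 3))  # -> "c"
```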
Model retraining: models are retrained by background threads at the server, with small cost but extra server CPU usage
Stale model handling: background updates cause stale learned models
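One plausible way to detect and repair staleness, sketched below under the assumption of a per-model version counter (the talk's exact protocol may differ): background retraining bumps the version, and a client whose cached model misses with an outdated version refetches the model and retries.

```python
class ModelSnapshot:
    """A learned model snapshot plus the version it was trained at (sketch)."""
    def __init__(self, version, predict):
        self.version = version
        self.predict = predict            # key -> predicted position range

class Server:
    def __init__(self):
        self.version = 0
        self.model = ModelSnapshot(0, lambda key: (0, 0))

    def retrain(self, predict):
        # Would run in background threads on the server (small CPU cost).
        self.version += 1
        self.model = ModelSnapshot(self.version, predict)

    def fetch_model(self):
        return self.model                 # clients pull this over the network

def lookup_with_fallback(server, cached_model, read_range, key):
    # read_range(rng, key) stands in for the one-sided read of that range.
    found = read_range(cached_model.predict(key), key)
    if found is None and cached_model.version != server.version:
        cached_model = server.fetch_model()     # refresh the stale model
        found = read_range(cached_model.predict(key), key)
    return found, cached_model
```

A real client would additionally need a server-side fallback path (e.g., an RPC that traverses the tree) for keys the refreshed model still cannot locate, such as keys inserted after the last retrain; this detail is omitted above.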
Performance of XSTORE on YCSB: 100M KV pairs, uniform workloads
Performance is sensitive to the dataset