Overview
Explore strategies for optimizing machine learning serving performance in this 30-minute technical talk by Iaroslav Geraskin from TikTok. Dive into the concept of leveraging distributed caches to accelerate ML serving, focusing on caching frequently accessed model predictions and intermediate computations. Gain practical insights into reducing latency and improving throughput in ML inference pipelines. Examine cache design considerations, implementation best practices, and the challenges of incorporating distributed caches into ML serving architectures. Equip yourself with the knowledge and tools to harness the full potential of distributed caching for accelerated ML serving. Suitable for technical practitioners, this talk was presented as part of the Data Science Festival MayDay 2024 event.
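The talk itself does not publish code, but the core idea of caching frequently accessed predictions can be illustrated with a minimal sketch. The example below assumes Redis as the distributed cache, a generic model object exposing a predict method, and JSON-serializable features; all names and parameters are illustrative, not taken from the talk.

# Minimal sketch of prediction caching with a distributed cache.
# Assumptions (not from the talk): Redis as the cache backend, a generic
# `model.predict` callable, and JSON-serializable features/predictions.
import hashlib
import json

import redis

cache = redis.Redis(host="localhost", port=6379)
CACHE_TTL_SECONDS = 300  # bound the staleness of cached predictions


def _cache_key(model_version: str, features: dict) -> str:
    # Key on model version plus canonicalized features so a model rollout
    # never serves predictions computed by an older version.
    payload = json.dumps(features, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    return f"pred:{model_version}:{digest}"


def predict_with_cache(model, model_version: str, features: dict):
    key = _cache_key(model_version, features)
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: skip inference entirely
    prediction = model.predict(features)  # cache miss: run the model
    cache.set(key, json.dumps(prediction), ex=CACHE_TTL_SECONDS)
    return prediction

A time-to-live keeps cached predictions from outliving model or feature updates; the same pattern can also be applied to intermediate computations such as embeddings rather than final predictions.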
Syllabus
Accelerating Machine Learning Serving with Distributed Caches
Taught by
Data Science Festival