Accelerating Machine Learning Serving with Distributed Caches

Data Science Festival via YouTube

Overview

Explore strategies for optimizing machine learning serving performance in this 30-minute technical talk by Iaroslav Geraskin of TikTok. Learn how distributed caches can accelerate ML serving by caching frequently accessed model predictions and intermediate computations, reducing latency and improving throughput in inference pipelines. Examine cache design considerations, implementation best practices, and the challenges of incorporating distributed caches into ML serving architectures. Suitable for technical practitioners, this talk was presented as part of the Data Science Festival MayDay event in 2024.
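
To make the core idea concrete, here is a minimal Python sketch (not taken from the talk itself) of caching model predictions in a distributed key-value store such as Redis, keyed by a hash of the request features. The `model` object, Redis host, and TTL below are illustrative assumptions, not details confirmed by the session.

```python
# Minimal sketch: serve a cached prediction when the same features are seen again,
# so repeated requests skip model inference entirely.
import hashlib
import json

import redis

cache = redis.Redis(host="localhost", port=6379)  # assumed Redis instance
TTL_SECONDS = 300  # assumed freshness window for cached predictions


def predict_with_cache(model, features: dict) -> float:
    """Return a cached prediction if present; otherwise run the model and cache the result."""
    # Deterministic cache key derived from the request features.
    key = "pred:" + hashlib.sha256(
        json.dumps(features, sort_keys=True).encode()
    ).hexdigest()

    cached = cache.get(key)
    if cached is not None:
        return float(cached)  # cache hit: no inference needed

    prediction = model.predict(features)  # cache miss: run the (hypothetical) model
    cache.set(key, prediction, ex=TTL_SECONDS)
    return prediction
```

The same pattern extends to intermediate computations (for example, expensive feature transformations), with the trade-offs the talk covers: key design, staleness/TTL policy, and cache consistency across serving replicas.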

Syllabus

Accelerating Machine Learning Serving with Distributed Caches

Taught by

Data Science Festival
