
Overcoming Challenges in Serving Large Language Models - SREcon23 Europe/Middle East/Africa

USENIX via YouTube

Overview

Explore the intricacies of hosting GPT-type models in a Kubernetes cluster with multi-GPU nodes in this 31-minute conference talk from SREcon23 Europe/Middle East/Africa. Delve into the challenges SREs face when providing custom GPT model capabilities within organizations, including managing large model sizes, implementing GPU sharding, and utilizing tensor parallelism. Learn about various model file formats, quantization techniques, and the benefits of open-source tools like Hugging Face Accelerate. Gain insights into balancing serving latency, prediction accuracy, and distributed serving, and discover best practices for optimizing resource allocation. Watch a live demonstration showcasing the performance and trade-offs of a GPT-based model, equipping you with practical knowledge to host and manage large language models in your own environment.
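To see why model size, sharding, and quantization are linked, it helps to estimate the GPU memory that a model's weights alone require at different precisions. The sketch below is illustrative and not from the talk; the helper name and the 7B parameter count are assumptions, and it ignores activations and KV-cache memory, which add further overhead in practice.

```python
def weight_memory_gib(num_params: float, bits_per_param: int) -> float:
    """Estimate GiB needed to hold a model's weights at a given precision.

    Covers weights only; activations, optimizer state, and KV cache
    are excluded. Hypothetical helper for back-of-envelope sizing.
    """
    return num_params * bits_per_param / 8 / 1024**3

params = 7e9  # e.g. a 7B-parameter model (illustrative)

print(f"fp16: {weight_memory_gib(params, 16):.1f} GiB")  # roughly 13 GiB
print(f"int8: {weight_memory_gib(params, 8):.1f} GiB")
print(f"int4: {weight_memory_gib(params, 4):.1f} GiB")   # roughly 3.3 GiB
```

Numbers like these explain the trade-offs covered in the talk: a model too large for one GPU at fp16 may fit after quantization, or it can be sharded across GPUs (for example via Hugging Face Accelerate's automatic device placement) at the cost of cross-device communication latency.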

Syllabus

SREcon23 Europe/Middle East/Africa - Overcoming Challenges in Serving Large Language Models

Taught by

USENIX
