

ML Training Acceleration with Heterogeneous Resources in ByteDance

CNCF [Cloud Native Computing Foundation] via YouTube

Overview

Explore machine learning training acceleration with heterogeneous resources at ByteDance in this 19-minute conference talk from KubeCon + CloudNativeCon Europe 2022. Delve into strategies for maximizing GPU utilization through GPU sharing, optimizing resource allocation with NUMA affinity, and achieving high-throughput network communication using RDMA CNI and Intel SR-IOV technology. Gain insights into accelerating model training, improving performance for large-scale distributed models, and effectively managing diverse CPU and GPU resources. Topics covered include the networking and scheduling aspects of GPU offline training, GPU online serving, unified GPU scheduling, and future work.
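
For a concrete flavor of how such heterogeneous resources are wired together on Kubernetes, the sketch below builds a training pod that requests GPUs alongside an RDMA-capable SR-IOV network attachment. This is a generic Kubernetes pattern, not ByteDance's actual setup from the talk: the NetworkAttachmentDefinition name sriov-rdma-net, the rdma/hca resource name, and the container image are illustrative assumptions that depend on the cluster's CNI and device-plugin configuration.

# Sketch: a training Pod combining GPU and RDMA/SR-IOV resources.
# Assumptions (not from the talk): Multus is installed, an SR-IOV RDMA
# NetworkAttachmentDefinition named "sriov-rdma-net" exists, and an RDMA
# device plugin exposes a resource named "rdma/hca".
from kubernetes import client

pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(
        name="dist-trainer-0",
        annotations={
            # Multus attaches a secondary SR-IOV interface for RDMA traffic.
            "k8s.v1.cni.cncf.io/networks": "sriov-rdma-net",
        },
    ),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="worker",
                image="my-training-image:latest",  # placeholder image
                command=["python", "train.py"],
                resources=client.V1ResourceRequirements(
                    limits={
                        "nvidia.com/gpu": "4",  # NVIDIA device plugin
                        "rdma/hca": "1",        # RDMA device plugin (name varies)
                    },
                ),
            )
        ],
    ),
)

# Submitting it requires a configured kubeconfig:
#   from kubernetes import config
#   config.load_kube_config()
#   client.CoreV1Api().create_namespaced_pod("default", pod)
print(client.ApiClient().sanitize_for_serialization(pod))

In stock Kubernetes, the NUMA-affinity piece mentioned in the overview is typically handled on the node side by kubelet's Topology Manager (for example, the single-numa-node policy), which aligns the allocated GPU, NIC virtual function, and CPU cores to the same NUMA node; the talk discusses ByteDance's scheduling work on top of primitives like these.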

Syllabus

Intro
GPU Offline Training (Network)
GPU Offline Training (Scheduling)
GPU Online Serving
GPU Unified Scheduling
Future Work

Taught by

CNCF [Cloud Native Computing Foundation]
