

Exploring Distributed Caching for Faster GPU Training with NVMe, GDS, and RDMA

Linux Foundation via YouTube

Overview

Learn how to optimize GPU training performance through distributed caching in this technical conference talk. Discover strategies for addressing GPU underutilization caused by the separation of compute and storage, leveraging modern data-path technologies such as NVMe storage, GPUDirect Storage (GDS), and RDMA networks. Explore experimental results from a Kubernetes-native distributed caching layer that improves PyTorch training efficiency using NVMe storage and high-speed RDMA networks (InfiniBand or specialized NICs). Gain insights into building balanced architectures that maximize GPU utilization and accelerate deep learning workflows through effective data access optimization.
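As a rough illustration of the data path the talk describes, the sketch below shows how a PyTorch DataLoader might read training samples through a POSIX path assumed to be backed by a distributed cache (for example, a FUSE-style mount over local NVMe). The mount point, file layout, and loader settings are hypothetical placeholders for illustration, not the speakers' implementation.

```python
# Minimal sketch (assumptions noted): samples are pre-serialized .pt files
# under a cache-backed mount; a warm cache serves reads from local NVMe
# instead of remote storage, keeping the GPU fed during training.
import os
from pathlib import Path

import torch
from torch.utils.data import Dataset, DataLoader

# Hypothetical mount point for the cache layer.
CACHE_MOUNT = Path(os.environ.get("CACHE_MOUNT", "/mnt/cache/dataset"))


class CachedTensorDataset(Dataset):
    """Loads pre-serialized tensor samples from a cache-backed directory."""

    def __init__(self, root: Path):
        # Assumed layout: one .pt file per sample containing {"x": ..., "y": ...}.
        self.files = sorted(root.glob("*.pt"))

    def __len__(self) -> int:
        return len(self.files)

    def __getitem__(self, idx: int):
        # torch.load goes through the cache mount; on a cache hit this is a
        # local NVMe read rather than a remote object-store fetch.
        sample = torch.load(self.files[idx])
        return sample["x"], sample["y"]


def make_loader(batch_size: int = 64) -> DataLoader:
    # Multiple workers and pinned memory overlap data loading with GPU compute,
    # so the training loop is less likely to stall on I/O.
    return DataLoader(
        CachedTensorDataset(CACHE_MOUNT),
        batch_size=batch_size,
        num_workers=8,
        pin_memory=True,
        shuffle=True,
    )
```

In this sketch the caching layer is transparent to PyTorch: the training code only sees an ordinary filesystem path, while the cache decides whether a read is served locally over NVMe or fetched over the network.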

Syllabus

Exploring Distributed Caching for Faster GPU Training with NVMe, GDS, and RDMA - Hope Wang & Bin Fan

Taught by

Linux Foundation

