

Accelerating GPU Server Access to Network-Attached Disaggregated Storage Using DPU

SNIAVideo via YouTube

Overview

Explore a 45-minute conference talk from SNIA SDC 2024 that delves into optimizing GPU server access to network-attached disaggregated storage through DPU technology. Learn about the transformative impact of AI on data center storage architectures and the challenges faced by CPU-centric servers in handling massive data processing requirements. Discover how AMD and MangoBoost address performance bottlenecks through DPU acceleration of infrastructure functions, including networking, storage, and GPU peer-to-peer communications. Gain insights into AI trends such as Large Language Models (LLMs), examine AMD's GPU systems, and understand the DPU's role in improving the efficiency of storage server access. Study real-world case studies featuring AMD MI300X GPU servers running open-source ROCm software alongside MangoBoost DPU's GPU-storage-boost technology, demonstrating improved performance through NVMe over TCP and peer-to-peer communications. Master the latest developments in AI infrastructure optimization while understanding how DPUs can enhance GPU efficiency and accelerate AI model training.

Syllabus

SNIA SDC 2024 - Accelerating GPU Server Access to Network-Attached Disaggregated Storage Using DPU

Taught by

SNIAVideo

