

Designing Cloud Storage for LLMs and Data-Intensive Workloads

Databricks via YouTube

Overview

Explore cloud storage optimization for large language models (LLMs) and data-intensive workloads in this 15-minute conference talk sponsored by Google. Discover how to design storage systems that maximize bandwidth for training, serving, and fine-tuning LLMs, keeping GPUs and TPUs operating at peak efficiency. Learn to build scalable AI/ML data pipelines and select the right combination of block, file, and object storage for each use case. Gain insights into optimizing AI/ML workloads, including data preparation, training, tuning, inference, and serving, whether Databricks is deployed on Google Kubernetes Engine, Vertex AI workflows, or Compute Engine. Delve into strategies for enhancing analytics workloads with Cloud Storage and Anywhere Cache. Presented by Sridevi Ravuri, Sr. Director of R&D at Google, this talk is essential for AI/ML and data practitioners aiming to improve their storage infrastructure for cutting-edge machine learning applications.

Syllabus

Sponsored by: Google | Designing Cloud Storage for LLMs and Data-Intensive Workloads

Taught by

Databricks

