Overview
Explore the optimization of large language model (LLM) data pipelines on the cloud through distributed caching in this 32-minute talk by Fu Zhengjia, Alluxio Open Source Evangelist. Learn about the challenges of LLM training, including resource-intensive processing and frequent I/O operations on large numbers of small files. Discover how Alluxio's distributed cache architecture addresses these issues, improving GPU utilization and resource efficiency. Examine the synergy between Alluxio and Spark for high-performance data processing in AI scenarios. Delve into the design and implementation of distributed cache systems, best practices for optimizing cloud-based data pipelines, and real-world applications at Microsoft, Tencent, and Zhihu. Gain insights into building modern data platforms and leveraging scalable infrastructure for LLM training and inference.
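The Alluxio-and-Spark combination described above typically means pointing Spark reads at an Alluxio path so hot data is served from the distributed cache rather than re-fetched from cloud object storage. Below is a minimal sketch of that pattern; the master hostname, port, and dataset path are illustrative assumptions, and it presumes the Alluxio client is already configured on the Spark classpath.

```python
from pyspark.sql import SparkSession

# Build a Spark session; the Alluxio client jar must already be on the
# Spark classpath for the alluxio:// scheme to resolve.
spark = (
    SparkSession.builder
    .appName("alluxio-cached-read")
    .getOrCreate()
)

# Reading through alluxio:// serves data from the distributed cache when it
# is already cached and falls back to the mounted cloud store on a miss,
# so repeated training passes avoid re-reading remote object storage.
# Hostname, port, and path below are placeholders for illustration.
df = spark.read.parquet("alluxio://alluxio-master:19998/datasets/corpus")
df.show(5)
```

The design choice this illustrates is that the application code does not change beyond the URI scheme: caching, eviction, and fallback to the underlying store are handled by the Alluxio layer rather than by the Spark job itself.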
Syllabus
Distributed Caching for Generative AI: Optimizing the LLM Data Pipeline on the Cloud
Taught by
The ASF