Cloud-Native AI: Wasm in Portable, Secure AI/ML Workloads
CNCF [Cloud Native Computing Foundation] via YouTube
Overview
Explore a conference talk that presents WebAssembly (Wasm) as a practical foundation for running AI/ML workloads in cloud-native environments. Learn how Wasm enables the deployment of AI models such as Llama 3, xAI's Grok, and Mixtral across cloud and edge platforms while retaining near-native performance. Discover the benefits of combining Rust and WebAssembly for AI/ML applications, with particular emphasis on portability, speed, and security. Through practical demonstrations, examine how AI inference models run on a Wasm runtime in Kubernetes, showcasing orchestration and execution across different devices. Designed for cloud-native practitioners and AI/ML enthusiasts, the session offers insights into modern approaches for deploying AI solutions in cloud architectures.
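To make the Rust-plus-Wasm inference pattern discussed in the talk concrete, the sketch below shows the general shape of an inference guest program compiled to WebAssembly. It is a minimal illustration only: the crate and method names (GraphBuilder, GraphEncoding, ExecutionTarget, TensorType, build_from_files) are assumed from a recent wasi-nn style binding and may differ across runtimes and crate versions, and the model file and tensor shape are hypothetical.

```rust
// Illustrative sketch of a Rust inference guest compiled to wasm32-wasi.
// The names below (GraphBuilder, GraphEncoding, ExecutionTarget, TensorType,
// build_from_files, ...) are assumed from a recent wasi-nn style binding and
// may differ by runtime and crate version.
use wasi_nn::{ExecutionTarget, GraphBuilder, GraphEncoding, TensorType};

fn main() {
    // The host runtime supplies the actual backend; the guest module stays
    // portable across cloud and edge devices.
    let graph = GraphBuilder::new(GraphEncoding::Onnx, ExecutionTarget::CPU)
        .build_from_files(["model.onnx"]) // hypothetical model file
        .expect("failed to load model via wasi-nn");

    // One execution context per inference request.
    let mut ctx = graph
        .init_execution_context()
        .expect("failed to create execution context");

    // Hypothetical preprocessed input tensor (1 x 3 x 224 x 224, f32).
    let input = vec![0.0f32; 3 * 224 * 224];
    ctx.set_input(0, TensorType::F32, &[1, 3, 224, 224], &input)
        .expect("failed to set input tensor");

    // The heavy math runs in the host-side backend, so sandboxing does not
    // give up near-native inference speed.
    ctx.compute().expect("inference failed");

    let mut output = vec![0.0f32; 1000];
    ctx.get_output(0, &mut output).expect("failed to read output");
    println!("first logit: {}", output[0]);
}
```

A module like this is typically built with `cargo build --target wasm32-wasi` and then run by a Wasm runtime such as WasmEdge, either standalone or scheduled in Kubernetes through a containerd shim, which is the orchestration setup the demonstrations in the talk revolve around.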
Syllabus
Cloud-Native AI: Wasm in Portable, Secure AI/ML Workloads - Miley Fu, Second State
Taught by
CNCF [Cloud Native Computing Foundation]