Overview
Watch a detailed technical presentation in which Google Cloud Product Manager Victor Moreno explains the network infrastructure built to support AI and machine learning workloads. Dive deep into the architecture of Google Cloud's dual-network system, designed to handle the exponential growth of AI/ML models that require massive data transfers across thousands of nodes, and learn how a software-defined network (SDN) with hardware acceleration lets GPUs and TPUs communicate at line rate while addressing challenges in load balancing and data center topology.

Explore the specialized GPU-to-GPU communication network built for high bandwidth and low latency, which uses electrical and optical switching for flexible topology reconfiguration, and the secondary network dedicated to external storage connections and data source access, crucial for efficiently storing training snapshots. Discover advanced load balancing techniques, including the Open Request Cost Aggregation (ORCA) framework, that optimize AI model response times through custom metrics such as queue depth, and see how these capabilities integrate with the Vertex AI service to provide scalable, efficient infrastructure that automatically adapts to workload demands.

Recorded live at Google Cloud's Sunnyvale campus, this presentation offers valuable insights into the cutting-edge network infrastructure powering modern AI/ML applications.
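The ORCA-style load balancing mentioned above relies on backends reporting utilization signals, such as queue depth, alongside their responses so the balancer can route requests to the least-loaded replica. The sketch below is a minimal illustration of that idea only, not Google's implementation or the ORCA API: the LoadReport class, the queue_depth metric name, and pick_backend are hypothetical names chosen for this example.

```python
from dataclasses import dataclass, field

# Hypothetical per-backend load report, loosely modeled on the idea behind
# ORCA (Open Request Cost Aggregation): each backend attaches standard and
# custom utilization metrics to its responses.
@dataclass
class LoadReport:
    backend: str
    cpu_utilization: float                      # fraction of CPU in use, 0.0-1.0
    custom_metrics: dict[str, float] = field(default_factory=dict)


def pick_backend(reports: list[LoadReport]) -> str:
    """Pick the backend with the smallest reported queue depth.

    Falls back to CPU utilization when a backend reports no queue depth.
    """
    def cost(report: LoadReport) -> float:
        return report.custom_metrics.get("queue_depth", report.cpu_utilization * 100)

    return min(reports, key=cost).backend


if __name__ == "__main__":
    reports = [
        LoadReport("gpu-node-a", 0.72, {"queue_depth": 14}),
        LoadReport("gpu-node-b", 0.55, {"queue_depth": 3}),
        LoadReport("gpu-node-c", 0.90, {"queue_depth": 27}),
    ]
    print(pick_backend(reports))  # -> gpu-node-b
```

In a real deployment the reports would arrive piggybacked on RPC responses and the balancer would combine several metrics; the point here is simply that a custom signal like queue depth can drive routing decisions for inference traffic.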
Syllabus
Google Cloud Network Infrastructure for AI/ML
Taught by
Tech Field Day