Optical CXL for Disaggregated Compute Architectures in AI and LLM Processing
Open Compute Project via YouTube
Overview
Learn how optical CXL technology is revolutionizing datacenter architectures for AI and Large Language Model (LLM) processing in this 13-minute technical presentation by Ron Swartzentruber, Director of Engineering at Lightelligence. Explore how CXL-capable processors, accelerators, switches, and memory devices can be integrated into large-scale systems that connect compute arrays to extensive memory resources spanning multiple datacenter racks. Discover the critical role of memory bandwidth and latency in AI model training, and understand how CXL over optics addresses these challenges while enabling memory pooling for improved performance. Examine real-world latency improvements, distance advantages, and decode throughput results demonstrated through LLM inference applications.
Syllabus
Optical CXL for disaggregated compute architectures
Taught by
Open Compute Project