Overview
Explore a technical conference talk examining how CXL technology enables new memory architectures for handling massive AI datasets. Learn about the challenges of data movement in AI training and inferencing at terabyte-to-petabyte scales, with a focus on performance and energy costs. Discover how CXL creates a paradigm shift by enabling memory disaggregation and hardware-coherent memory semantics over PCIe-compliant serial interfaces. Understand the implications of headless NUMA nodes and how this new scalable memory subsystem architecture moves beyond traditional CPU-attached memory. In this session presented by Rambus experts Danny Moore and Taekang Song, delve into heterogeneous and disaggregated memory approaches and examine potential future applications of this transformative technology.
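To make the headless-NUMA-node idea concrete, the sketch below (not from the talk) shows how software can see and use such memory on a Linux host with libnuma, assuming the kernel exposes a CXL memory expander as a CPU-less NUMA node; the node id CXL_NODE is an illustrative assumption, not a value given by the presenters.

/*
 * Minimal sketch: enumerate NUMA nodes and bind an allocation to a
 * hypothetical CXL-backed, CPU-less ("headless") node.
 *
 * Assumptions: Linux, libnuma installed, CXL memory exposed as node 1.
 * Build: gcc cxl_alloc.c -lnuma -o cxl_alloc
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <numa.h>

#define CXL_NODE 1  /* hypothetical id of the headless CXL memory node */

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return EXIT_FAILURE;
    }

    /* Enumerate nodes; a headless node reports memory but no CPUs. */
    int max_node = numa_max_node();
    for (int node = 0; node <= max_node; node++) {
        long long free_bytes = 0;
        long long size = numa_node_size64(node, &free_bytes);
        if (size > 0)
            printf("node %d: %lld MiB total, %lld MiB free\n",
                   node, size >> 20, free_bytes >> 20);
    }

    /* Allocate a buffer whose pages are placed on the CXL-backed node. */
    size_t len = 64UL << 20;  /* 64 MiB */
    void *buf = numa_alloc_onnode(len, CXL_NODE);
    if (!buf) {
        perror("numa_alloc_onnode");
        return EXIT_FAILURE;
    }

    /* Touch the memory so pages are actually faulted in on that node. */
    memset(buf, 0, len);

    numa_free(buf, len);
    return EXIT_SUCCESS;
}

Because CXL memory is hardware-coherent, the application code above is unchanged from ordinary NUMA programming; only the placement policy decides whether pages land in CPU-attached DRAM or in the disaggregated CXL pool.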
Syllabus
Memory Scalability in AI co-design
Taught by
Open Compute Project