

Optimizing AI Inferencing with CXL Memory - Memory Tiering Strategies for Enhanced Performance

Open Compute Project via YouTube

Overview

Learn how CXL-attached memory can enhance AI inference performance for Large Language Models (LLMs) in this 20-minute technical presentation from Astera Labs. Explore memory tiering strategies for AI inference platforms, focusing on how Compute Express Link (CXL) technology enables improved performance, scalability, and cost-effectiveness for memory-intensive applications. Discover techniques for raising CPU and GPU utilization, minimizing latency, and increasing throughput when working with large datasets. Gain insight into the emerging role of CXL memory architecture and its potential impact on advancing Generative AI capabilities.
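The presentation's own techniques are not reproduced here, but as a minimal sketch of the kind of tiering policy such a platform might apply, the following Python example keeps hot KV-cache blocks in a small fast tier (local DRAM/HBM) and demotes cold blocks to a larger, slower CXL-attached tier, promoting them back on access. The TieredKVCache class, block identifiers, and capacities are illustrative assumptions, not material from the talk.

# Hypothetical two-tier placement policy: hot KV-cache blocks stay in a
# small fast tier; cold blocks are demoted to a larger CXL-attached tier
# and promoted back when accessed again.
from collections import OrderedDict

class TieredKVCache:
    def __init__(self, fast_capacity_blocks):
        self.fast_capacity = fast_capacity_blocks
        self.fast_tier = OrderedDict()  # block_id -> data, LRU order (fast local memory)
        self.slow_tier = {}             # block_id -> data (CXL-attached expansion memory)

    def get(self, block_id):
        if block_id in self.fast_tier:
            self.fast_tier.move_to_end(block_id)  # refresh recency
            return self.fast_tier[block_id]
        if block_id in self.slow_tier:
            data = self.slow_tier.pop(block_id)   # promote hot block to the fast tier
            self._put_fast(block_id, data)
            return data
        raise KeyError(block_id)

    def put(self, block_id, data):
        self._put_fast(block_id, data)

    def _put_fast(self, block_id, data):
        self.fast_tier[block_id] = data
        self.fast_tier.move_to_end(block_id)
        while len(self.fast_tier) > self.fast_capacity:
            cold_id, cold_data = self.fast_tier.popitem(last=False)
            self.slow_tier[cold_id] = cold_data   # demote least-recently-used block

# Example: a fast tier sized for two blocks; the coldest block spills to CXL.
cache = TieredKVCache(fast_capacity_blocks=2)
for blk in ("a", "b", "c"):
    cache.put(blk, f"kv-{blk}")
print(sorted(cache.fast_tier))  # ['b', 'c']
print(sorted(cache.slow_tier))  # ['a']

In a real system the demotion step would migrate pages to a CXL memory node (for example via NUMA placement) rather than a Python dict, but the promote-on-access, demote-coldest logic is the essence of a memory tiering policy.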

Syllabus

Optimizing AI Inferencing with CXL Memory

Taught by

Open Compute Project

