
Enabling Composable Scalable Memory for AI Inference with CXL Switch

Open Compute Project via YouTube

Overview

Learn how CXL 2.0 switch technology enables composable, scalable memory systems for AI inference workloads in this technical presentation from Xconn Technologies and H3 Platform executives. Explore the architecture, configuration, and components of a real composable memory system designed to meet the substantial memory demands of Large Language Models (LLMs). Discover the working mechanisms behind CXL 2.0-based systems becoming available in 2024, examine their performance characteristics, and understand how these systems enhance AI inference performance through practical demonstrations and architectural insights.

Syllabus

Enabling Composable Scalable Memory for AI Inference with CXL Switch

Taught by

Open Compute Project

