LLM Service Revolution Through Memory Computing Fusion Technology - From Datacenter to On-Device AI

Open Compute Project via YouTube

Overview

Watch a technical presentation from SK hynix Fellow Euicheol Lim exploring how Processing-in-Memory (PiM) technology can revolutionize Large Language Model (LLM) services. Learn about SK hynix's AiM device and the AiMX accelerator prototype, which address the growing cost and efficiency challenges of GPU computing in AI applications. Discover how this memory-computing fusion technology achieves high bandwidth and energy efficiency for both datacenter and on-device LLM services. Examine the implementation of multi-batch operations with larger models, and understand how AiM solutions can significantly reduce operational costs compared to traditional GPU setups while maintaining high performance.
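The bandwidth argument behind PiM can be sketched with simple arithmetic: single-batch LLM decoding must stream the full weight set from memory for every generated token, so token rate is bounded by memory bandwidth rather than compute. The sketch below uses illustrative numbers (a hypothetical 7B-parameter fp16 model and assumed bandwidth figures), not values from the talk.

```python
# Back-of-envelope: why LLM decode is memory-bandwidth-bound, and why
# moving compute into memory (PiM) raises the ceiling. All figures are
# illustrative assumptions, not measurements from the presentation.

def max_decode_tokens_per_s(params_billions, bytes_per_param, bandwidth_gb_s):
    """Upper bound on single-batch decode rate: each generated token
    must read the entire weight set from memory at least once."""
    weight_bytes = params_billions * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / weight_bytes

# Hypothetical 7B-parameter model in fp16 (2 bytes per parameter):
gpu_bound = max_decode_tokens_per_s(7, 2, 2_000)   # ~2 TB/s HBM-class GPU
pim_bound = max_decode_tokens_per_s(7, 2, 16_000)  # assumed aggregate in-memory bandwidth
print(f"GPU-bound: {gpu_bound:.0f} tok/s, PiM-bound: {pim_bound:.0f} tok/s")
```

The ratio of the two bounds tracks the bandwidth ratio directly, which is why aggregate in-memory bandwidth, rather than peak FLOPS, dominates this workload.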

Syllabus

LLM Service Revolution Through Memory Computing Fusion Technology - From Datacenter to On-Device AI

Taught by

Open Compute Project

