Watch a technical presentation from SK hynix Fellow Euicheol Lim exploring how Processing in Memory (PiM) technology can revolutionize Large Language Model (LLM) services. Learn about SK hynix's AiM device and AiMX accelerator prototype, which address the growing cost and efficiency challenges of GPU computing in AI applications. Discover how this memory-computing fusion technology achieves high bandwidth and energy efficiency for both datacenter and on-device LLM services. Examine the implementation of multi-batch operations with larger models, and understand how AiM solutions can significantly reduce operational costs compared to traditional GPU setups while maintaining high performance.
LLM Service Revolution Through Memory Computing Fusion Technology - From Datacenter to On-Device AI
Open Compute Project via YouTube
Syllabus
LLM Service Revolution Through Memory Computing Fusion Technology - From Datacenter to On-Device AI