Explore a 49-minute session featuring Yutao Sun from Tsinghua University, co-author of the paper "You Only Cache Once: Decoder-Decoder Architectures for Language Models". Delve into YOCO, a decoder-decoder architecture for Large Language Models that caches key-value pairs only once, reducing inference memory consumption and prefill latency while improving throughput across a range of context lengths and model sizes. Gain insights into this approach and its potential impact on AI development. Discover additional resources, including The Deep Dive newsletter for the latest AI research and industry trends, and Unify's blog for an in-depth exploration of the AI deployment stack. Connect with Unify through their website, GitHub, Discord, Twitter, and Reddit to stay updated on cutting-edge AI advancements and join the community discussion.
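To make the "cache key-value pairs only once" idea concrete, here is a minimal PyTorch sketch of a decoder-decoder layout, not the paper's implementation: the class names (`YOCOSketch`, `CrossDecoderLayer`), the layer counts and dimensions, and the use of a plain causal Transformer encoder in place of YOCO's efficient self-attention variant are all simplifications assumed for illustration. The point it shows is structural: the first half of the stack produces hidden states that are projected into a single global KV cache, and every layer in the second half reuses that one cache via cross-attention instead of keeping its own.

```python
import torch
import torch.nn as nn


class CrossDecoderLayer(nn.Module):
    """Hypothetical cross-decoder block: queries come from the running hidden
    states, while keys/values come from one shared, precomputed cache."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x, kv, attn_mask):
        # Cross-attention over the single global KV cache (no per-layer cache).
        h, _ = self.attn(self.norm1(x), kv, kv, attn_mask=attn_mask)
        x = x + h
        return x + self.ffn(self.norm2(x))


class YOCOSketch(nn.Module):
    """Toy decoder-decoder model: the self-decoder's output is projected into
    ONE KV cache, which every cross-decoder layer reuses."""

    def __init__(self, vocab_size=32000, d_model=512, n_heads=8, n_layers=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Stand-in self-decoder: the paper uses an efficient attention variant;
        # a causal TransformerEncoder is used here purely for illustration.
        self.self_decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(
                d_model, n_heads, 4 * d_model, batch_first=True, norm_first=True
            ),
            num_layers=n_layers // 2,
        )
        self.to_kv = nn.Linear(d_model, d_model)  # builds the shared cache
        self.cross_decoder = nn.ModuleList(
            [CrossDecoderLayer(d_model, n_heads) for _ in range(n_layers // 2)]
        )
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        seq_len = tokens.size(1)
        causal = nn.Transformer.generate_square_subsequent_mask(seq_len).to(tokens.device)
        x = self.self_decoder(self.embed(tokens), mask=causal)
        kv = self.to_kv(x)  # cached once; reused by all cross-decoder layers
        for layer in self.cross_decoder:
            x = layer(x, kv, attn_mask=causal)
        return self.lm_head(x)


if __name__ == "__main__":
    logits = YOCOSketch()(torch.randint(0, 32000, (1, 16)))
    print(logits.shape)  # torch.Size([1, 16, 32000])
```

Because only one set of keys and values is stored for the whole stack (rather than one per layer), the cache footprint no longer scales with depth, which is the memory saving the session discusses.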
Syllabus
YOCO Explained
Taught by
Unify