Efficiency in the Age of Large Scale Models - Designing and Optimizing Deep Learning Systems
HUJI Machine Learning Club via YouTube
Overview
Explore a comprehensive lecture on the evolution and efficiency challenges of large-scale machine learning models. Delve into both theoretical and practical aspects of model efficiency, from the rapid scaling of neural networks over the past decade to current challenges in computational cost and accessibility. Learn how architectural choices affect model expressiveness, discover domain-specific optimization strategies for NLP and quantum physics, and understand an incremental computation approach that achieves up to a 100x reduction in the computational cost of large language model inference. The speaker, Or Sharir, brings extensive expertise from his work at AI21 Labs, including the development of a 178B-parameter language model, and from his current research at Caltech on quantum many-body problems and efficient model inference. Gain insight into the tension between model performance and resource constraints in modern AI development.
Syllabus
Presented on Thursday morning, February 8th, 2024, in room C221
Taught by
HUJI Machine Learning Club