Overview
Explore recent advances in foundation models in this 36-minute conference talk by Dr. Irina Rish. Delve into scaling laws, emergent behaviors, and AI democratization, focusing on large-scale, self-supervised pre-trained models such as GPT-3, GPT-4, and ChatGPT. Discover how neural scaling laws make model performance predictable at larger scales and inform AI safety (see the sketch below). Learn about collaborative efforts by universities and non-profits to make cutting-edge AI technology accessible. Examine challenges and solutions in obtaining large-scale compute for academic and non-profit AI research, emphasizing the importance of open-source foundation models. Topics covered include building AGI, generalization in AI, the neural scaling revolution, successes of large-scale models, the history of neural scaling laws, and future directions for the field.
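To make the idea of performance prediction concrete, here is a minimal sketch (not from the talk itself) of fitting a power-law scaling curve L(N) = a·N^(−α) + c to observed losses and extrapolating to a larger model size N. The data points, parameter names, and initial guesses below are illustrative assumptions, not results reported by the speaker.

```python
# Minimal sketch of neural-scaling-law extrapolation (illustrative only).
# Assumes a power-law form L(N) = a * N**(-alpha) + c, where N is model
# size, alpha is the scaling exponent, and c is an irreducible loss floor.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(n, a, alpha, c):
    # Loss decays as a power of model size n toward the floor c.
    return a * n ** (-alpha) + c

# Hypothetical (synthetic) measurements: model sizes and evaluation losses.
sizes = np.array([1e6, 1e7, 1e8, 1e9])
losses = np.array([4.2, 3.1, 2.4, 1.9])

# Fit the three parameters; p0 gives rough starting guesses.
(a, alpha, c), _ = curve_fit(scaling_law, sizes, losses, p0=[10.0, 0.1, 1.0])

# Extrapolate: predict the loss of a 10x larger model before training it.
predicted = scaling_law(1e10, a, alpha, c)
print(f"fitted exponent alpha={alpha:.3f}, predicted loss at 1e10 params={predicted:.3f}")
```

In practice, compute, data, and parameter count are scaled jointly and the fitted exponents vary by task and architecture; the point is only that a few small training runs can forecast the behavior of much larger ones.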
Syllabus
- Intro
- Building AGI as an Ultimate Goal of AI Field
- It’s All About Further Generalization
- Neural Scaling Revolution
- Successes of Large-Scale Models
- History of Neural Scaling Laws
- What’s Next?
Taught by
Open Data Science