Overview
Explore an in-depth analysis of Meta AI's LLaMA, a series of large language models ranging from 7B to 65B parameters. Dive into the technical aspects of this groundbreaking research, including training data, hyperparameters, architecture modifications, and efficient implementation. Learn how LLaMA outperforms GPT-3 on most benchmarks despite being significantly smaller. Understand the implications of open-sourcing such models for the research community and the potential impact on the field of artificial intelligence. Gain insights into the main results, model completions, and the future of foundation language models.
Syllabus
- Introduction & Paper Overview
- Rant on Open-Sourcing
- Training Data
- Training Hyperparameters
- Architecture Modifications
- Optimizer
- Efficient Implementation
- Main Results
- Some more completions
- Conclusion
Taught by
Yannic Kilcher
Reviews
5.0 rating, based on 1 Class Central review
- I really liked the course: the explanations, the rant, and the amusing bits, but especially the seamless transitions between the different parts of the course (and no ads, unlike YouTube).