

Understanding How Llama 3.1 Works - A Technical Deep Dive

Oxen via YouTube

Overview

Explore a comprehensive technical video analysis of Meta's 92-page research paper on the Llama 3 family of models, examining how Meta developed its most competitive open-source models to date. Delve into the three key levers used to improve on Llama 2, and the distinction between pre-training and post-training. Learn about reward modeling, language model architecture, supervised fine-tuning (SFT) data preparation, and synthetic data quality assessment. Discover the enhanced capabilities and coding implementations that make Llama 3 stand out in the AI landscape. Through detailed chapter breakdowns, gain insights into the technical architecture, training methodologies, and practical applications of this advanced language model series. Perfect for AI researchers, developers, and enthusiasts seeking to understand the evolution and technical intricacies of state-of-the-art language models.
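
The overview touches on reward modeling, SFT data preparation, and synthetic data quality; in the Llama 3 post-training recipe these come together through rejection sampling, where a reward model scores several sampled responses per prompt and only the top-scoring one is kept as SFT data. The sketch below is a minimal illustration of that idea, not code from the video or from Meta: generate_candidates() and reward_score() are hypothetical stand-ins for a Llama 3 generation endpoint and a trained reward model.

# Illustrative sketch only. generate_candidates() and reward_score() are
# hypothetical placeholders for a Llama 3 generation endpoint and a trained
# reward model; they are not real APIs.
import random
from typing import List, Tuple

def generate_candidates(prompt: str, n: int = 8) -> List[str]:
    # Placeholder: sample n candidate responses for the prompt.
    return [f"candidate {i} for: {prompt}" for i in range(n)]

def reward_score(prompt: str, response: str) -> float:
    # Placeholder: a reward model would return a scalar quality score.
    return random.random()

def build_sft_data(prompts: List[str]) -> List[Tuple[str, str]]:
    """Rejection sampling: keep only the highest-scoring response per prompt."""
    sft_pairs = []
    for prompt in prompts:
        candidates = generate_candidates(prompt)
        best = max(candidates, key=lambda r: reward_score(prompt, r))
        sft_pairs.append((prompt, best))
    return sft_pairs

if __name__ == "__main__":
    print(build_sft_data(["Summarize the Llama 3 paper in one sentence."]))

In the paper's actual pipeline, the retained pairs feed supervised fine-tuning, which is followed by preference-optimization rounds; the video's later chapters on SFT data and synthetic data quality cover how that filtering is done in practice.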

Syllabus

Intro
The Three Levers to Improve Llama 2
Pre-Training vs Post-Training
Post-Training
Reward Model and Language Model
SFT Data
Synthetic Data Quality
Capabilities
Code

Taught by

Oxen

