Overview
Explore an 18-minute technical video that delves into the evolution and architecture of GPT models, focusing on how OpenAI developed ChatGPT using the decoder component of a Transformer. Learn about the fundamentals of Generative Pre-Trained Transformers, their instruction fine-tuning process, and the implementation of Reinforcement Learning from Human Feedback (RLHF). Starting with a decoder recap, progress through key concepts including next-word prediction, the development trajectory from GPT-1 to GPT-3, emergent abilities, and various forms of in-context learning. Examine the technical aspects of instruction fine-tuning with InstructGPT and understand how RLHF works to align language models. Conclude with a comprehensive overview of major LLM players including OpenAI, Google, Anthropic, Meta, Microsoft, and Mistral. Access supplementary materials including a detailed mindmap and referenced academic papers to enhance understanding of these advanced machine learning concepts.
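The core pre-training idea covered in the video, next-word prediction with a Transformer decoder, can be illustrated with a minimal sketch. The snippet below is an assumption of this write-up rather than material from the video; it uses the Hugging Face transformers library and a pre-trained GPT-2 checkpoint to score the most likely next tokens for a prompt:

```python
# Minimal next-word prediction sketch with a decoder-only model (GPT-2).
# Requires the `transformers` and `torch` packages; not part of the video itself.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The Transformer decoder predicts the next"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits   # shape: (1, sequence_length, vocab_size)

# The final position holds the model's distribution over the next token.
next_token_logits = logits[0, -1]
top = torch.topk(next_token_logits, k=5)
for token_id, score in zip(top.indices, top.values):
    print(repr(tokenizer.decode(token_id)), float(score))
```

Text generation simply repeats this step, appending the chosen token to the prompt and predicting again.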
Syllabus
- Decoder Recap
- Generative Pre-Trained Transformer
- Next-Word Prediction
- GPT-1, GPT-2, GPT-3
- Emergent Abilities
- In-Context Learning (zero-, one-, few-shot; see the prompt sketch after this list)
- Instruction Fine-Tuning (InstructGPT)
- RLHF (Reinforcement Learning from Human Feedback)
- Map of LLMs (OpenAI, Google, Anthropic, Meta, Microsoft, Mistral)
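To make the zero-, one-, and few-shot distinction concrete, the sketch below shows illustrative prompt formats, modelled on the translation examples in the GPT-3 paper rather than taken from the video:

```python
# Illustrative in-context learning prompts (zero-, one-, few-shot),
# modelled on the translation examples in the GPT-3 paper; not from the video.
zero_shot = "Translate English to French:\ncheese =>"

one_shot = (
    "Translate English to French:\n"
    "sea otter => loutre de mer\n"
    "cheese =>"
)

few_shot = (
    "Translate English to French:\n"
    "sea otter => loutre de mer\n"
    "peppermint => menthe poivrée\n"
    "plush giraffe => girafe en peluche\n"
    "cheese =>"
)
```

In every case the model's weights stay fixed; the examples are supplied only in the prompt, and the model completes the pattern.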
Taught by
Donato Capitella