Overview
Explore the inner workings of Large Language Models (LLMs) in this 23-minute video from 3Blue1Brown. Unpack the multilayer perceptrons (MLPs) inside a transformer and discover how they may store facts. Start with a quick refresher on transformers, examine the assumptions behind a toy example, and look inside a multilayer perceptron. Learn how the model's parameters are counted and explore the concept of superposition in neural networks. Gain insights from AI alignment and mechanistic interpretability research, with links to additional resources for further exploration. Perfect for anyone interested in understanding how LLMs process and store information.
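
As a preview of the "Inside a multilayer perceptron" and "Counting parameters" chapters, here is a minimal NumPy sketch of one transformer MLP block and its parameter count. The GPT-3-scale dimensions (12,288-dimensional embeddings, a 4x hidden expansion, 96 layers) match the example the video works through; everything else here (function names, toy sizes) is an illustrative assumption, not code from the video.

```python
import numpy as np

def mlp_param_count(d_model: int, d_hidden: int) -> int:
    # Two weight matrices (up- and down-projection) plus their bias vectors.
    return d_model * d_hidden + d_hidden + d_hidden * d_model + d_model

def mlp_forward(x, W_in, b_in, W_out, b_out):
    # Up-project, apply a nonlinearity, down-project; the caller adds the
    # residual connection. ReLU is used here for simplicity (GPT-3 uses GELU).
    hidden = np.maximum(0.0, x @ W_in + b_in)
    return hidden @ W_out + b_out

# GPT-3-scale dimensions, as in the video's example.
d_model, d_hidden, n_layers = 12_288, 4 * 12_288, 96
per_block = mlp_param_count(d_model, d_hidden)
print(f"{per_block:,} parameters per MLP block")                  # 1,208,020,992
print(f"{per_block * n_layers:,} across all {n_layers} layers")   # ~116B of GPT-3's ~175B

# Tiny toy run (full-size GPT-3 matrices would be ~2.4 GB each).
rng = np.random.default_rng(0)
d, h = 8, 32
y = mlp_forward(rng.normal(size=d), rng.normal(size=(d, h)), np.zeros(h),
                rng.normal(size=(h, d)), np.zeros(d))
print(y.shape)  # (8,)
```

Running the count confirms the video's observation that the MLP blocks account for roughly two-thirds of GPT-3's ~175 billion parameters.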
Syllabus
- Where facts in LLMs live
- Quick refresher on transformers
- Assumptions for our toy example
- Inside a multilayer perceptron
- Counting parameters
- Superposition (see the sketch after this list)
- Up next
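
To illustrate the "Superposition" chapter above: the video argues that because high-dimensional spaces admit vastly more nearly-orthogonal directions than they have dimensions, an MLP may represent more features than it has neurons. Below is a minimal sketch of that geometry, with arbitrary assumed sizes; it is not code from the video.

```python
import numpy as np

# Superposition: in an n-dimensional space you can pack far more than n
# directions that are *nearly* (not exactly) orthogonal, so a layer may
# represent more features than it has neurons. Sizes here are arbitrary.
rng = np.random.default_rng(0)
n_dims, n_features = 100, 10_000        # 100x more "features" than dimensions

V = rng.normal(size=(n_features, n_dims))
V /= np.linalg.norm(V, axis=1, keepdims=True)   # unit feature directions

# Sample random pairs and measure the angles between them.
i, j = rng.integers(0, n_features, size=(2, 5_000))
keep = i != j                                    # drop self-pairs
cos = np.sum(V[i] * V[j], axis=1)[keep]
angles = np.degrees(np.arccos(cos))
print(f"mean angle: {angles.mean():.1f} degrees")     # clusters tightly near 90
print(f"max |cos| sampled: {np.abs(cos).max():.2f}")  # small pairwise interference
```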
Taught by
3Blue1Brown