What Is ChatGPT Doing? Training Neural Networks - Episode 4
Overview
Explore the inner workings of large language models, particularly ChatGPT, in this 15-minute video from Wolfram. Examine how neural networks are trained, including whether layers can be changed during training, how fine-tuning works, and the role of reinforcement learning. Learn how training examples are used, what the output looks like, and how parameters are chosen and adjusted over time. Consider what happens when improvement stagnates, and come away with a clearer picture of the mechanics behind ChatGPT's behavior.
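The mechanics the overview alludes to, adjusting parameters step by step and stopping when improvement stagnates, can be illustrated with a minimal gradient-descent sketch. This example is not from the video; every name and value in it is illustrative only.

```python
# Minimal sketch (illustrative, not from the video): adjust parameters to
# reduce a loss, and stop early once improvement stagnates.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: fit y = 3x + 1 with a single weight and bias.
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 1.0 + rng.normal(scale=0.1, size=100)

w, b = 0.0, 0.0          # parameters to be learned
lr = 0.1                 # learning rate: how large each adjustment is
best_loss = float("inf")
patience, stalled = 20, 0

for step in range(1000):
    pred = w * x + b
    loss = np.mean((pred - y) ** 2)

    # Gradient of the mean-squared error with respect to w and b.
    grad_w = np.mean(2 * (pred - y) * x)
    grad_b = np.mean(2 * (pred - y))

    # Adjust the parameters slightly in the direction that lowers the loss.
    w -= lr * grad_w
    b -= lr * grad_b

    # Stop early if the loss has stopped improving.
    if loss < best_loss - 1e-6:
        best_loss, stalled = loss, 0
    else:
        stalled += 1
        if stalled >= patience:
            break

print(f"step={step}, w={w:.2f}, b={b:.2f}, loss={loss:.4f}")
```

Real language-model training operates on billions of parameters with backpropagation through many layers, but the loop structure (compute loss, adjust parameters, watch for stagnation) is the same idea discussed in the video.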
Syllabus
Intro
What Happens to a Neural Net While Training?
Can We Change the Layers While Training?
What about Fine-Tuning?
Reinforcement Learning
Training Examples
What Does the Output Look Like?
Further Investigation
How Do We Decide on Parameters? And How Do We Adjust Them over Time?
What Happens if It Doesn't Improve over Time?
Taught by
Wolfram