What Is ChatGPT Doing? Understanding Large Language Models - Episode 2
Overview
Explore the inner workings of large language models, particularly ChatGPT, in this 20-minute video from Wolfram. Follow how ChatGPT generates text one word at a time by repeatedly sampling from a probability distribution over possible next tokens, and get a sense of the model's size and complexity. The discussion tries out different prompts and compares their effects, then looks at where tokens are stored, how the full set of word probabilities is used, how individual word choices build up into larger sentences, and why the model's output can lose coherence or get stuck in longer passages.
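The core loop the video describes, picking the next word by sampling from a probability distribution, can be sketched in a few lines. The vocabulary and scores below are invented for illustration; in the real model a neural network produces the scores, and a temperature-like parameter controls how strongly the highest-probability token is favored.

```python
import math
import random

def softmax(scores, temperature=1.0):
    # Turn raw scores into a probability distribution over the vocabulary.
    # Lower temperature sharpens the distribution toward the top token.
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next(vocab, scores, temperature=1.0, rng=random):
    # "Adding one word at a time": draw the next token at random,
    # weighted by its probability.
    probs = softmax(scores, temperature)
    return rng.choices(vocab, weights=probs, k=1)[0]

# Hypothetical next-token candidates and scores (not from any real model).
vocab = ["cat", "sat", "on", "the", "mat"]
scores = [0.5, 2.0, 0.1, 1.0, 0.3]
print(sample_next(vocab, scores, temperature=0.8))
```

Running the sampler repeatedly, each time appending the chosen token to the prompt and rescoring, is what produces a full sentence; always taking the single most probable token instead tends to give the flat, repetitive text the video contrasts with sampled output.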
Syllabus
Intro
It’s Just Adding One Word at a Time
How Big Is the Model?
Let's Try a Different Prompt
Where Do the Tokens Get Stored?
What about the Other Word Probabilities?
How Can You Build Larger Sentences?
Why Does It Seem to Get Stuck?
Taught by
Wolfram