Classroom Contents
ROME - Locating and Editing Factual Associations in GPT - Paper Explained & Author Interview
- 1 - Introduction
- 2 - What are the main questions in this subfield?
- 3 - How causal tracing reveals where facts are stored (see the first sketch after this list)
- 4 - Clever experiments show the importance of MLPs
- 5 - How do MLPs store information?
- 6 - How to edit language model knowledge with precision? (see the second sketch after this list)
- 7 - What does it mean to know something?
- 8 - Experimental Evaluation & the CounterFact benchmark
- 9 - How to obtain the required latent representations?
- 10 - Where is the best location in the model to perform edits?
- 11 - What do these models understand about language?
- 12 - Questions for the community
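Chapter 3's causal tracing is the paper's localization method: run a factual prompt cleanly, corrupt the subject-token embeddings with noise, then restore individual clean hidden states and measure how much of the correct prediction comes back. The sketch below illustrates that idea with an off-the-shelf GPT-2 from Hugging Face transformers; the prompt, the noise scale, and the single (layer, token) restoration site are illustrative assumptions rather than the authors' exact setup, which sweeps over many such sites to build a trace.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt = "The Eiffel Tower is located in the city of"
ids = tok(prompt, return_tensors="pt").input_ids
target_id = tok(" Paris").input_ids[0]
subject_len = len(tok("The Eiffel Tower").input_ids)  # token positions covering the subject

def p_target(logits):
    # Probability the model assigns to " Paris" as the next token.
    return torch.softmax(logits[0, -1], dim=-1)[target_id].item()

with torch.no_grad():
    # 1) Clean run: remember the hidden state at every layer.
    clean = model(ids, output_hidden_states=True)
    clean_hidden = clean.hidden_states            # embeddings + one tensor per block

    # 2) Corrupted run: add noise to the subject-token embeddings
    #    (a crude stand-in for the paper's calibrated noise).
    embeds = model.transformer.wte(ids)
    noised = embeds.clone()
    noised[0, :subject_len] += 3 * embeds.std() * torch.randn_like(noised[0, :subject_len])
    corrupted = model(inputs_embeds=noised)

    # 3) Corrupted run with one clean state restored: patch the clean hidden
    #    state back in at a single (layer, token) site and see how much of the
    #    correct answer comes back.
    layer, token = 6, subject_len - 1             # example site: a middle layer, last subject token
    def restore(module, inputs, output):
        hidden = output[0].clone()
        hidden[:, token] = clean_hidden[layer + 1][:, token]
        return (hidden,) + output[1:]
    handle = model.transformer.h[layer].register_forward_hook(restore)
    restored = model(inputs_embeds=noised)
    handle.remove()

print(f"p(' Paris')  clean={p_target(clean.logits):.3f}  "
      f"corrupted={p_target(corrupted.logits):.3f}  restored={p_target(restored.logits):.3f}")
```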
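Chapter 6 covers the edit itself. ROME treats an MLP projection matrix as a linear key-value memory and rewrites one association with a closed-form rank-one update that disturbs the other stored associations as little as possible. The NumPy toy below sketches that update on synthetic data; the dimensions and the random K, V, k*, v* are stand-ins (in the paper, k* is derived from the subject's MLP input and v* is optimized to produce the new object), so it shows only the linear-algebra step.

```python
import numpy as np

rng = np.random.default_rng(0)
d_key, d_val, n_stored = 64, 32, 200

K = rng.normal(size=(d_key, n_stored))     # keys the layer already responds to (synthetic)
V = rng.normal(size=(d_val, n_stored))     # values it currently returns for them (synthetic)
W = V @ np.linalg.pinv(K)                  # a linear "memory": W K ~= V in least squares
C = K @ K.T                                # (uncentered) covariance of the stored keys

k_star = rng.normal(size=(d_key, 1))       # key selecting the fact to rewrite
v_star = rng.normal(size=(d_val, 1))       # value encoding the new object

# Rank-one update: argmin over W' of ||W' K - V||  subject to  W' k* = v*.
Cinv_k = np.linalg.solve(C, k_star)
W_new = W + (v_star - W @ k_star) @ Cinv_k.T / (Cinv_k.T @ k_star)

print("constraint error:", np.linalg.norm(W_new @ k_star - v_star))   # should be ~0
print("relative change on stored keys:",
      np.linalg.norm((W_new - W) @ K) / np.linalg.norm(W @ K))
```

The first print should be near zero (the new association holds exactly), while the second measures how little the previously stored mappings move, which is the sense in which the edit is precise.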