Class Central Classrooms
AGI Alignment Experiments - INSTRUCT vs Foundation and Agent Models
YouTube videos curated by Class Central.

Classroom Contents
- 1 - The importance of interpretability in AI alignment
- 2 - Alignment as a system, not a single model
- 3 - The importance of testing intelligence in complex systems
- 4 - The Right Research
- 5 - Self-stabilizing systems
- 6 - Alignment experiment 2
- 7 - The second agent model
- 8 - Alignment research with existing technology
- 9 - Alignment research on superintelligence
- 10 - The dangers of nanotechnology and genetic alteration
- 11 - The dangers of an AI with no hard goals
- 12 - The instability of a simple for loop
- 13 - The process of creating a machine that can write novels
- 14 - The stability of instruct models
- 15 - The stability of agent models