Class Central Classrooms (beta)
YouTube videos curated by Class Central.
Classroom Contents
Alignment Research - GPT-3 vs GPT-NeoX - Which One Understands AGI Alignment Better?
- 1 - The potential consequences of an AI objective function
- 2 - Unintended consequences of an AGI system focused on minimizing human suffering
- 3 - The risks of implementing an AGI with the wrong objectives
- 4 - The inconsistency of GPT-3
- 5 - The dangers of a superintelligence's objectives
- 6 - The dangers of superintelligence
- 7 - The risks of an AI with the objective to maximize future freedom of action for humans
- 8 - The risks of an AI with the objective function of "maximizing future freedom of action"
- 9 - The risks of an AI maximizing for geopolitical power
- 10 - The quest for geopolitical power leading to increased cyberattacks and warfare
- 11 - The potential consequences of implementing the proposed objective function
- 12 - The dangers of maximizing global GDP
- 13 - The dangers of incentivizing economic growth
- 14 - The dangers of focusing on GDP growth
- 15 - The objective function of a superintelligence
- 16 - The risks of an AGI minimizing human suffering
- 17 - The objective function of AGI systems
- 18 - The risks of an AI system that prioritizes minimizing human suffering
- 19 - The risks of creating a superintelligence focused on reducing suffering
- 20 - The problem with measuring human suffering
- 21 - The objective function of reducing suffering for all living things
- 22 - The dangers of an excessively altruistic superintelligence
- 23 - The risks of the proposed objective function
- 24 - The potential risks of an AI fixated on reducing suffering
- 25 - The risks of AGI with a bad objective function