Alignment Research - GPT-3 vs GPT-NeoX - Which One Understands AGI Alignment Better?
David Shapiro ~ AI via YouTube
Overview
Syllabus
- The potential consequences of an AI objective function
- Unintended consequences of an AGI system focused on minimizing human suffering
- The risks of implementing an AGI with the wrong objectives
- The inconsistency of GPT-3
- The dangers of a superintelligence's objectives
- The dangers of superintelligence
- The risks of an AI with the objective to maximize future freedom of action for humans
- The risks of an AI with the objective function of "maximizing future freedom of action"
- The risks of an AI maximizing for geopolitical power
- The quest for geopolitical power leading to increased cyberattacks and warfare
- The potential consequences of implementing the proposed objective function
- The dangers of maximizing global GDP
- The dangers of incentivizing economic growth
- The dangers of focusing on GDP growth
- The objective function of a superintelligence
- The risks of an AGI minimizing human suffering
- The objective function of AGI systems
- The risks of an AI system that prioritizes reducing human suffering
- The risks of creating a superintelligence focused on reducing suffering
- The problem with measuring human suffering
- The objective function of reducing suffering for all living things
- The dangers of an excessively altruistic superintelligence
- The risks of the proposed objective function
- The potential risks of an AI fixated on reducing suffering
- The risks of AGI with a bad objective function
Taught by
David Shapiro ~ AI