

Alignment Research- GPT-3 vs GPT-NeoX - Which One Understands AGI Alignment Better?

David Shapiro ~ AI via YouTube

Overview

Explore the complex landscape of AI alignment and potential risks in this 48-minute video comparing GPT-3's and GPT-NeoX's understanding of AGI alignment. Delve into the unintended consequences of various AI objective functions, including minimizing human suffering, maximizing future freedom of action, and pursuing geopolitical power. Examine the dangers of superintelligence, the inconsistencies in language models, and the challenges of measuring human suffering. Analyze the risks of focusing on GDP growth, creating excessively altruistic AI systems, and implementing poorly defined objective functions. Gain insights into the critical importance of carefully designing AGI systems to avoid catastrophic outcomes and ensure beneficial artificial intelligence development.
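For readers who want to try this kind of side-by-side comparison themselves, the sketch below poses a single alignment-style prompt to both models. This is a minimal illustration, not the setup used in the video: the prompt wording, sampling parameters, and model choices (OpenAI's legacy Completions endpoint for GPT-3, the EleutherAI/gpt-neox-20b checkpoint via Hugging Face for GPT-NeoX) are all assumptions made for the example.

```python
# Minimal sketch (assumed setup, not the video's actual workflow): send the same
# alignment prompt to GPT-3 and GPT-NeoX-20B and print both answers for comparison.
import openai  # openai<1.0; assumes OPENAI_API_KEY is set in the environment
from transformers import pipeline

PROMPT = (
    "Suppose an AGI is given the objective function 'minimize human suffering'. "
    "Describe the most likely unintended consequences."
)

# GPT-3 via the legacy Completions endpoint (model name is an example choice).
gpt3 = openai.Completion.create(
    model="text-davinci-002",
    prompt=PROMPT,
    max_tokens=256,
    temperature=0.7,
)
gpt3_text = gpt3["choices"][0]["text"].strip()

# GPT-NeoX-20B via Hugging Face; loading the 20B checkpoint needs a large GPU
# (roughly 40 GB+ of memory) or CPU/disk offloading.
neox = pipeline("text-generation", model="EleutherAI/gpt-neox-20b")
neox_out = neox(PROMPT, max_new_tokens=256, do_sample=True, temperature=0.7)
neox_text = neox_out[0]["generated_text"][len(PROMPT):].strip()

print("GPT-3:\n", gpt3_text, "\n")
print("GPT-NeoX-20B:\n", neox_text)
```

Running the same prompt through both models with identical sampling settings is the simplest way to make their answers roughly comparable, though sampling noise means any single pair of completions should be read as anecdotal rather than conclusive.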

Syllabus

- The potential consequences of an AI objective function
- Unintended consequences of an AGI system focused on minimizing human suffering
- The risks of implementing an AGI with the wrong objectives
- The inconsistency of GPT-3
- The dangers of a superintelligence's objectives
- The dangers of superintelligence
- The risks of an AI with the objective to maximize future freedom of action for humans
- The risks of an AI with the objective function of "maximizing future freedom of action"
- The risks of an AI maximizing for geopolitical power
- The quest for geopolitical power leading to increased cyberattacks and warfare
- The potential consequences of implementing the proposed objective function
- The dangers of maximizing global GDP
- The dangers of incentivizing economic growth
- The dangers of focusing on GDP growth
- The objective function of a superintelligence
- The risks of an AGI minimizing human suffering
- The objective function of AGI systems
- The risks of an AI system that prioritizes human suffering
- The risks of creating a superintelligence focused on reducing suffering
- The problem with measuring human suffering
- The objective function of reducing suffering for all living things
- The dangers of an excessively altruistic superintelligence
- The risks of the proposed objective function
- The potential risks of an AI fixated on reducing suffering
- The risks of AGI with a bad objective function

Taught by

David Shapiro ~ AI

