Alignment Research - GPT-3 vs GPT-NeoX - Which One Understands AGI Alignment Better?

David Shapiro ~ AI via YouTube

Now playing (21 of 25): The objective function of reducing suffering for all living things

Class Central Classrooms (beta)

YouTube playlists curated by Class Central.

Classroom Contents

  1. The potential consequences of an AI objective function
  2. Unintended consequences of an AGI system focused on minimizing human suffering
  3. The risks of implementing an AGI with the wrong objectives
  4. The inconsistency of GPT-3
  5. The dangers of a superintelligence's objectives
  6. The dangers of superintelligence
  7. The risks of an AI with the objective to maximize future freedom of action for humans
  8. The risks of an AI with the objective function of "maximizing future freedom of action"
  9. The risks of an AI maximizing for geopolitical power
  10. The quest for geopolitical power leading to increased cyberattacks and warfare
  11. The potential consequences of implementing the proposed objective function
  12. The dangers of maximizing global GDP
  13. The dangers of incentivizing economic growth
  14. The dangers of focusing on GDP growth
  15. The objective function of a superintelligence
  16. The risks of an AGI minimizing human suffering
  17. The objective function of AGI systems
  18. The risks of an AI system that prioritizes human suffering
  19. The risks of creating a superintelligence focused on reducing suffering
  20. The problem with measuring human suffering
  21. The objective function of reducing suffering for all living things
  22. The dangers of an excessively altruistic superintelligence
  23. The risks of the proposed objective function
  24. The potential risks of an AI fixated on reducing suffering
  25. The risks of AGI with a bad objective function
