AGI Alignment Experiments - INSTRUCT vs Foundation and Agent Models

David Shapiro ~ AI via YouTube

Classroom Contents

  1. The importance of interpretability in AI alignment
  2. Alignment as a system, not a single model
  3. The importance of testing intelligence in complex systems
  4. The Right Research
  5. Self-stabilizing systems
  6. Alignment experiment 2
  7. The second agent model
  8. Alignment research with existing technology
  9. Alignment research on superintelligence
  10. The dangers of nanotechnology and genetic alteration
  11. The dangers of an AI with no hard goals
  12. The instability of a simple for loop
  13. The process of creating a machine that can write novels
  14. The stability of instruct models
  15. The stability of agent models
