Overview
Explore the profound implications of superintelligent AI for human society in this 33-minute RSA Conference talk by Nick Bostrom, Professor at the University of Oxford and Director of the Future of Humanity Institute. Delve into the distinctive safety concerns surrounding advanced artificial intelligence and examine recent developments in conceptual models for addressing them. Learn about the modern human condition, potential extinction risks, the history of AI, machine learning, and deep learning. Investigate time scales, scalable control, strategic behavior, optimal policy, and AI training methods. Gain insight into the challenges and opportunities presented by the transition to the machine intelligence era, and understand the importance of developing robust safety measures for advanced AI systems.
Syllabus
Introduction
Modern Human Condition
Human Condition
Extinction
History of AI
Machine Learning
Go
Deep Learning
The Past
Time Scale
Conclusion
General Wisdom
Scalable Control
Strategic Behavior
Optimal Policy
Training an AI
Summary
Taught by
RSA Conference