Overview
Dive into a comprehensive 9-hour course on AI Safety, exploring crucial aspects of machine learning and ethics. Learn how to shape the development of strong AI systems in a safer direction, covering topics such as robustness, monitoring, alignment, and systemic safety. Explore deep learning fundamentals, risk decomposition, accident models, black swans, adversarial robustness, anomaly detection, and interpretable uncertainty. Delve into transparency, trojans, detecting emergent behavior, honest models, and machine ethics. Examine ML applications in improved decision-making and cyberdefense, as well as cooperative AI concepts. Analyze potential existential hazards, the relationship between AI and evolution, and the balance between safety and capabilities. Gain valuable insights to help reduce existential risks from advanced AI systems and contribute to the field of AI safety research.
Syllabus
Introduction
Deep Learning Review
Risk Decomposition
Accident Models
Black Swans
Adversarial Robustness
Black Swan Robustness
Anomaly Detection
Interpretable Uncertainty
Transparency
Trojans
Detecting Emergent Behavior
Honest Models
Machine Ethics
ML for Improved Decision-Making
ML for Cyberdefense
Cooperative AI
X-Risk Overview
Possible Existential Hazards
AI and Evolution
Safety-Capabilities Balance
Review and Conclusion
Taught by
freeCodeCamp.org
Reviews
5.0 rating, based on 1 Class Central review
The course was good and very informative; it gave a deeper understanding of the safety side of machine learning and how to use these systems responsibly.