Just-So Stories for AI - Explaining Black-Box Predictions

Strange Loop Conference via YouTube

Overview

Explore state-of-the-art strategies for explaining black-box machine learning model decisions in this 42-minute Strange Loop Conference talk by Sam Ritchie. Delve into the challenges of interpreting complex algorithms and the importance of demanding plausible explanations for AI-driven decisions. Learn about various techniques for generating explanations, including decision trees, random forests, and LIME. Examine the parallels between AI rationalization and human decision-making processes, and discuss the ethical implications of relying on unexplainable AI systems. Understand the significance of model interpretability in maintaining human control over technological advancements, ensuring compliance with data protection regulations, and clarifying our ethical standards in an increasingly AI-driven world.
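For readers unfamiliar with LIME, one of the explanation techniques surveyed in the talk, the sketch below shows the general idea: fit a simple, interpretable surrogate model around a single prediction of a black-box classifier. It uses the open-source lime package with a scikit-learn random forest; the dataset, model, and parameters are illustrative choices, not taken from the talk.

```python
# A minimal sketch of the LIME technique, assuming the open-source
# `lime` package and scikit-learn are installed. The iris dataset and
# random forest stand in for any black-box model.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
X, y = data.data, data.target

# Train an opaque model whose individual predictions we want to explain.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# LIME perturbs the input and fits a local, interpretable surrogate.
explainer = LimeTabularExplainer(
    X,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one prediction: which features pushed it toward its class?
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The printed weights approximate how much each feature contributed to this one prediction, which is the kind of "plausible explanation" for a black-box decision that the talk examines.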

Syllabus

Introduction
Outline
Stripe
Rules
Models
Decision Trees
Random Forest
Explanations
Intuition
Structure
Algorithm
Explanation
Elephant Trunk
Observation
LIME
AI Rationalisation
Frogger
Methods of model interpretability
Human interpretability
Peter Norvig
Roger Sperry
Homo Deus
Algorithms
Explanations are harmful
Why explanations are important
Human compatible AI
Data protection regulation
Clarify our ethics
Conclusion

Taught by

Strange Loop Conference
