Overview
Learn about methods for evaluating AI explanations in this lecture, which covers key concepts including a taxonomy of explanation evaluation, simulatability, and application-grounded evaluations. Explore how reliance on AI systems develops, and examine complementary team performance between humans and AI. The lecture also considers metrics such as self-reported confidence and trust in AI systems, and offers practical insight into how explanations from AI systems can be systematically assessed and improved through various evaluation frameworks and methodologies. It provides both theoretical foundations and practical guidance for evaluating AI system explanations in real-world contexts.
Syllabus
Announcements
Taxonomy of explanation evaluation
Simulatability
Application-grounded evaluations
Reliance
Complementary team performance
Self-reported confidence and trust
Taught by
UofU Data Science