Pragmatic Interpretability - A Human-AI Cooperation Approach
USC Information Sciences Institute via YouTube
Overview
Explore the concept of pragmatic interpretability in machine learning models through this insightful 53-minute talk by Shi Feng of the University of Chicago. Delve into the challenges of understanding how AI models work and their potential for intelligence augmentation. Examine a more practical approach to interpretability that emphasizes modeling human needs in AI cooperation. Learn about evaluating and optimizing human-AI teams as unified decision-makers, and discover how models can learn to explain selectively. Investigate methods for incorporating human intuition into models and explanations beyond the context of working with AI. Conclude with a discussion of how models can pragmatically infer information about their human teammates. Gain valuable insights from Shi Feng, a postdoctoral researcher at the University of Chicago whose work focuses on human-AI cooperation in natural language processing.
Syllabus
Pragmatic Interpretability
Taught by
USC Information Sciences Institute