Explore a 22-minute conference talk from the 24th International Conference on Intelligent User Interfaces that delves into automated rationale generation as a technique for explainable AI. Learn how this approach enables real-time explanation generation by training computational models to translate an autonomous agent's internal state and actions into natural language. Discover the process of collecting explanation data, training neural rationale generators for different styles, and understanding how humans perceive these explanations. Examine two user studies that investigate the plausibility of generated rationales and user preferences regarding confidence, human-likeness, justification, and understandability. Gain insights into how detailed rationales can help users form stable mental models of agent behavior, and understand the implications for communicating failure and unexpected behavior in AI systems.
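The pipeline the talk describes, collecting (state, action, rationale) explanation data and training a model to map agent states to natural-language rationales, can be sketched roughly as follows. This is an illustrative toy, not the talk's actual system: the corpus, state features, and action names are invented, and a simple nearest-neighbour lookup stands in for the neural rationale generator.

```python
# Minimal sketch of a rationale-generation pipeline, assuming a toy
# grid-navigation agent. A nearest-neighbour retrieval stands in for the
# neural encoder-decoder generator described in the talk; all feature and
# action names here are hypothetical.

from math import dist

# Step 1 (assumed format): collected explanation data, i.e. tuples of
# (agent internal state, action taken, human-written rationale).
CORPUS = [
    ({"dist_to_goal": 1.0, "obstacle_ahead": 0.0}, "move_forward",
     "I moved forward because the path to the goal was clear."),
    ({"dist_to_goal": 3.0, "obstacle_ahead": 1.0}, "turn_left",
     "I turned left to avoid the obstacle blocking my path."),
    ({"dist_to_goal": 0.0, "obstacle_ahead": 0.0}, "stop",
     "I stopped because I reached the goal."),
]

ACTIONS = ["move_forward", "turn_left", "stop"]

def featurize(state, action):
    # Translate the agent's internal state and action into a numeric vector,
    # the representation a learned generator would condition on.
    return [state["dist_to_goal"], state["obstacle_ahead"],
            float(ACTIONS.index(action))]

def generate_rationale(state, action):
    # A trained neural generator would decode fresh natural language here;
    # this stand-in returns the rationale of the closest training example.
    query = featurize(state, action)
    _, rationale = min(
        ((dist(query, featurize(s, a)), r) for s, a, r in CORPUS),
        key=lambda pair: pair[0],
    )
    return rationale

print(generate_rationale({"dist_to_goal": 2.5, "obstacle_ahead": 1.0},
                         "turn_left"))
```

Running the sketch on an unseen state retrieves the obstacle-avoidance rationale, showing the real-time state-to-explanation mapping the talk evaluates in its user studies.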