Explore cutting-edge research on explainable AI in this 50-minute session from the ACM FAT* 2019 conference. Chaired by Giles Hooker, the session features four presentations covering diverse aspects of AI explainability: actionable recourse in linear classification, model reconstruction from explanations, efficient search for diverse coherent explanations, and a case study comparing human predictions aided by explanations with machine learning models in deception detection. Gain insight into the latest advances and open challenges in making AI systems more transparent and interpretable.
Syllabus
FAT* 2019: Explainability
Taught by
ACM FAccT Conference