Watch a 19-minute conference talk from the 2022 Symposium on Foundations of Responsible Computing (FORC) exploring the connections between two popular machine learning interpretation techniques: SmoothGrad and LIME. Learn how these post hoc explanation methods, when analyzed mathematically, converge to equivalent results given enough samples, leading to insights about their robustness and linearity properties. Discover the theoretical foundations and practical implications through detailed mathematical derivations and experimental validation on both synthetic and real-world datasets, presented by Sushant Agarwal from the University of Waterloo. Gain valuable insights into making black-box machine learning models more interpretable and reliable for critical applications in healthcare and criminal justice.
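The convergence claim can be sketched numerically. The snippet below is a minimal illustration (not the speaker's code): it compares SmoothGrad, which averages the model's gradient over Gaussian perturbations of an input, with a LIME-style local linear fit to the same black box on the same Gaussian neighborhood. The toy model `f`, the point `x0`, and the noise scale `sigma` are all hypothetical choices made for this sketch; with many samples the two explanations should roughly agree.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box model: a smooth nonlinear function of two inputs.
def f(X):
    return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

def grad_f(x):
    # Analytic gradient of f, used by SmoothGrad.
    return np.array([np.cos(x[0]), x[1]])

x0 = np.array([0.3, -0.7])   # input to explain (arbitrary)
sigma = 0.1                  # perturbation scale (arbitrary)
n = 20000                    # number of perturbation samples

# SmoothGrad: average the gradient over Gaussian perturbations of x0.
noise = rng.normal(0.0, sigma, size=(n, 2))
smoothgrad = np.mean([grad_f(x0 + z) for z in noise], axis=0)

# LIME-style explanation: fit a linear surrogate to the black box
# on the same Gaussian neighborhood (least squares on centered data).
X = x0 + noise
y = f(X)
Xc = X - X.mean(axis=0)
yc = y - y.mean()
lime_weights, *_ = np.linalg.lstsq(Xc, yc, rcond=None)

print("SmoothGrad:  ", smoothgrad)
print("LIME weights:", lime_weights)
```

With 20,000 samples both explanations land close to the true gradient at `x0`, which is the kind of sample-limit agreement the talk establishes formally.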
Towards Unification and Robustness of Perturbation and Gradient Based Explanations
Harvard CMSA via YouTube
Overview
Syllabus
Introduction
Explainability
Setup
Contributions
SmoothGrad
C-LIME
Summary
Taught by
Harvard CMSA