Interpretable Explanations of Black Boxes by Meaningful Perturbation - CAP6412 Spring 2021

Classroom Contents
- 1 Intro
- 2 Content
- 3 Abstract: summary of image saliency methods; attention maps are limited by heuristic properties and architectural constraints
- 4 Introduction: current problems in interpreting black-box predictors; intuitive visualization methods are only heuristic, and their meaning remains unclear
- 5 Contribution: develop principles and methods to explain any black-box function by determining attributes of its input-output mapping; internal mechanisms merely implement these attributes
- 6 Related Work: gradient-based methods backpropagate the gradient for a class label to the image layer; other methods: DeConvNet, Guided Backprop
- 7 Related Work: CAM
- 8 Related Work Comparison
- 9 Comparison with other saliency methods
- 10 Principle: a black box is a mapping function
- 11 Explanations as Meta-predictors: rules are used to explain a robin classifier (see the rule sketch after this list)
- 12 Advantages of Explanations as Meta-predictors: faithfulness can be measured as prediction accuracy, and explanations can be found automatically
- 13 Local Explanations
- 14 Saliency: deleting parts of an image x acts as a perturbation of the whole image
- 15 A Meaningful Image Perturbation (see the formulation after this list)
- 16 Deletion and Preservation
- 17 Artifact Reduction (see the code sketch after this list)
- 18 Experiment: interpretability
- 19 Experiment: testing hypotheses, animal part saliency
- 20 Experiment: adversarial defense
- 21 Experiment: localization and pointing
- 22 Conclusion
- 23 Questions?
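
For item 11, a sketch of the meta-predictor idea, under the assumption that the slides follow the paper (Fong and Vedaldi, ICCV 2017); the notation here is a reconstruction, not a quote from the slides. An explanation of a robin classifier f is itself a rule that predicts f's behavior:

```latex
% A rule explaining a binary robin classifier f:
% f fires exactly on the images that contain a robin.
Q(x; f) = \{\, x \in \mathcal{X}_{\text{robin}} \iff f(x) = +1 \,\}
```

Because the rule is itself a predictor, its faithfulness (item 12) can be scored as the rule's prediction error over a set of images, which is what makes it possible to search for explanations automatically.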
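For items 14-16, a reconstruction of the perturbation operator and the deletion/preservation objectives; the convention used here (m(u) = 1 preserves pixel u, m(u) = 0 deletes it) and the exact operator forms are my reading of the paper and may differ in detail from the slides.

```latex
% Perturbation operator \Phi: deleted pixels are replaced by a
% constant \mu_0, by noise \eta, or by a Gaussian-blurred copy of x_0.
[\Phi(x_0; m)](u) =
\begin{cases}
  m(u)\, x_0(u) + (1 - m(u))\, \mu_0                 & \text{constant} \\
  m(u)\, x_0(u) + (1 - m(u))\, \eta(u)               & \text{noise} \\
  \int g_{\sigma_0 (1 - m(u))}(v - u)\, x_0(v)\, dv  & \text{blur}
\end{cases}

% Deletion game: the smallest deleted region that destroys the
% class-c score f_c.
m^{*} = \operatorname*{argmin}_{m \in [0,1]^{\Lambda}}
        \lambda \lVert \mathbf{1} - m \rVert_1 + f_c(\Phi(x_0; m))

% Preservation game: the smallest preserved region that keeps
% the score high.
m^{*} = \operatorname*{argmin}_{m \in [0,1]^{\Lambda}}
        \lambda \lVert m \rVert_1 - f_c(\Phi(x_0; m))
```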
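For item 17, a minimal PyTorch sketch of the deletion game with two artifact-reduction tricks from the paper: optimizing a low-resolution mask that is upsampled to image size, and a total-variation penalty on the mask. All names and hyperparameters here (lam_l1, lam_tv, mask size, blur sigma, step count) are illustrative guesses rather than the paper's published settings, and the paper's jitter term is omitted for brevity.

```python
import torch
import torch.nn.functional as F
from torchvision import transforms

def tv_norm(m, beta=3.0):
    # Total-variation penalty: discourages high-frequency mask artifacts.
    dh = (m[:, :, 1:, :] - m[:, :, :-1, :]).abs().pow(beta).sum()
    dw = (m[:, :, :, 1:] - m[:, :, :, :-1]).abs().pow(beta).sum()
    return dh + dw

def meaningful_perturbation(model, x, target, steps=300,
                            lam_l1=0.01, lam_tv=0.01, mask_hw=28):
    """Deletion game: find a small blurred region whose removal drops the
    target-class score. Convention: m = 1 preserves a pixel, m = 0 deletes it."""
    blurred = transforms.GaussianBlur(21, sigma=10.0)(x)  # "deleted" reference
    m = torch.full((1, 1, mask_hw, mask_hw), 0.5, requires_grad=True)
    opt = torch.optim.Adam([m], lr=0.1)
    for _ in range(steps):
        # Artifact-reduction trick 1: optimize a coarse mask, then upsample.
        m_up = F.interpolate(m, size=x.shape[-2:], mode="bilinear",
                             align_corners=False)
        x_pert = m_up * x + (1.0 - m_up) * blurred  # Phi(x_0; m), blur variant
        score = F.softmax(model(x_pert), dim=1)[0, target]
        # Artifact-reduction trick 2: TV penalty keeps the mask smooth.
        loss = lam_l1 * (1.0 - m).abs().sum() + lam_tv * tv_norm(m) + score
        opt.zero_grad()
        loss.backward()
        opt.step()
        m.data.clamp_(0.0, 1.0)  # project the mask back onto [0, 1]
    return m.detach()

# Hypothetical usage with a pretrained classifier and a preprocessed
# image tensor x of shape (1, 3, 224, 224):
#   from torchvision import models
#   model = models.resnet50(weights="IMAGENET1K_V2").eval()
#   mask = meaningful_perturbation(model, x, target=15)  # 15: ImageNet "robin"
```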