Classroom Contents
Optimizing for Interpretability in Deep Neural Networks - Mike Wu
- 1 Intro
- 2 The challenge of interpretability
- 3 Lots of different definitions and ideas
- 4 Asking the model questions
- 5 A conversation with the model
- 6 A case for human simulation
- 7 Simulatable?
- 8 Post-Hoc Analysis
- 9 Interpretability as a regularizer
- 10 Average Path Length (see the sketch after this list)
- 11 Problem Setup
- 12 Tree Regularization (Overview)
- 13 Toy Example for Intuition
- 14 Humans are context dependent
- 15 Regional Tree Regularization
- 16 Example: Three Kinds of Interpretability
- 17 MIMIC III Dataset
- 18 Evaluation Metrics
- 19 Results on MIMIC III
- 20 A second application: treatment for HIV
- 21 Distilled Decision Tree
- 22 Caveats and Gotchas
- 23 Regularizing for Interpretability
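
The outline items "Average Path Length", "Tree Regularization (Overview)", and "Distilled Decision Tree" all revolve around one question: how well can a small decision tree mimic a trained network? Below is a minimal sketch, not code from the talk, of the average-path-length (APL) metric that idea rests on. It assumes scikit-learn is available, uses a synthetic stand-in for the network's predictions, and only computes the metric itself; in the approach the outline describes, a differentiable surrogate of this quantity would additionally be trained and added to the network's loss as a regularizer.

```python
# Sketch (assumed, not from the talk): estimate average decision-tree path
# length as a proxy for how "simulatable" a model's decision boundary is.
import numpy as np
from sklearn.tree import DecisionTreeClassifier


def average_path_length(X, model_predictions, max_depth=8):
    """Fit a shallow decision tree to mimic the model's predictions, then
    return the mean number of nodes a sample visits from root to leaf."""
    tree = DecisionTreeClassifier(max_depth=max_depth)
    tree.fit(X, model_predictions)
    # decision_path returns a sparse (n_samples, n_nodes) indicator matrix;
    # each row marks the nodes visited by one sample on its way to a leaf.
    node_indicator = tree.decision_path(X)
    path_lengths = np.asarray(node_indicator.sum(axis=1)).ravel()
    return float(path_lengths.mean())


if __name__ == "__main__":
    # Synthetic example: pretend y_hat is a trained network's predictions.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 10))
    y_hat = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    print("Average path length:", average_path_length(X, y_hat))
```

A shorter average path corresponds to a decision boundary that a human could step through by hand, which is the "simulatability" notion the outline's earlier items ("Simulatable?", "A case for human simulation") point to.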