Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead - Cynthia Rudin


Institute for Advanced Study via YouTube


Classroom Contents


  1. Introduction
  2. Bad decisions
  3. Definitions
  4. Why
  5. Article
  6. Cross-validation
  7. Accuracy-interpretability
  8. Two types of machine learning problems
  9. Critically ill patients
  10. Two helps to be
  11. Optimization problem
  12. Saliency maps
  13. Case-based reasoning
  14. My network
  15. Prototype layer
  16. Red-bellied woodpecker
  17. Wilson's warbler
  18. Accuracy vs. interpretability
  19. Computer-aided mammography
  20. Interpretable AI
  21. Case Study
  22. Results
  23. Two-Layer Additive Risk Model
  24. Submission to Special Issue
  25. Paper
  26. Problems
  27. Most powerful argument
  28. Papers
  29. Interactive tool
  30. Demo
  31. Optimization
  32. Machine Learning in Medicine
