Stop Explaining Black Box ML Models for High Stakes Decisions and Use Interpretable Models
Toronto Machine Learning Series (TMLS) via YouTube
Overview
Explore the critical implications of using black box machine learning models for high-stakes decisions in this thought-provoking 49-minute conference talk from the Toronto Machine Learning Series. Delve into the insights of Cynthia Rudin, Professor of Computer Science, Electrical and Computer Engineering, and Statistical Science at Duke University, as she challenges the widespread use of opaque ML models. Examine the serious societal consequences that arise from relying on these models, including flawed bail and parole decisions in criminal justice. Discover why post-hoc explanations of black box models can be unreliable and potentially misleading. Learn about the advantages of interpretable machine learning models, whose explanations are inherent and faithful to their actual computations. Gain valuable perspectives on the importance of transparency and accountability in AI-driven decision-making for high-stakes scenarios.
Syllabus
Stop Explaining Black Box ML Models for High Stakes Decisions and Use Interpretable Models
Taught by
Toronto Machine Learning Series (TMLS)