YouTube

Stop Explaining Black Box ML Models for High Stakes Decisions and Use Interpretable Models

Toronto Machine Learning Series (TMLS) via YouTube

Overview

Explore the critical implications of using black box machine learning models for high-stakes decisions in this thought-provoking 49-minute conference talk from the Toronto Machine Learning Series. Delve into the insights of Cynthia Rudin, Professor of Computer Science, Electrical and Computer Engineering, and Statistical Science at Duke University, as she challenges the widespread use of opaque ML models. Examine the serious societal consequences, including flawed bail and parole decisions in criminal justice, that arise from relying on these models. Discover why explanations for black box models can be unreliable and potentially misleading. Learn about the advantages of interpretable machine learning models, which provide inherent explanations faithful to their actual computations. Gain valuable perspectives on the importance of transparency and accountability in AI-driven decision-making processes for high-stakes scenarios.

Syllabus

Stop Explaining Black Box ML Models for High Stakes Decisions and Use Interpretable Models

Taught by

Toronto Machine Learning Series (TMLS)
