
Machine Learning Explainability: From Beta Coefficients to SHAP Values

Open Data Science via YouTube

Overview

Explore machine learning interpretability in a 33-minute talk delivered by Moneyfarm's Head of Data, Giorgio Clauser, covering the evolution of AI model explainability from basic beta coefficients to SHAP (SHapley Additive exPlanations) values. Master both global and local explainability concepts through practical Python Jupyter Notebook demonstrations, with a special focus on complex models such as XGBoost and their application to binary classification. Learn how to address regulatory requirements and the human need to understand AI decisions while gaining hands-on experience with data visualization and model interpretation techniques. Discover practical approaches to embedding SHAP values in production output, evaluate when SHAP is an effective choice, and understand contemporary machine learning models through real-world fintech examples. The talk also touches on AI governance, ML security considerations, and continual visual learning in artificial intelligence.
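The talk's arc from beta coefficients to SHAP values rests on a useful fact: for a linear model with independent features, the exact Shapley value of feature i reduces to beta_i * (x_i - mean_i). The sketch below (a toy model with made-up coefficients, not material from the talk) computes Shapley values by brute force over all feature orderings and checks that they match the closed form and sum to f(x) minus the expected output:

```python
from itertools import permutations
from math import factorial, isclose

# Toy linear model: f(x) = b0 + sum_i beta[i] * x[i]
# (illustrative numbers only, not taken from the talk)
beta = [2.0, -1.0, 0.5]
b0 = 3.0
mu = [1.0, 2.0, 0.0]   # background means E[x_i]
x = [3.0, 1.0, 4.0]    # instance to explain
n = len(beta)

def value(coalition):
    """Model output with features in `coalition` fixed at x,
    the remaining features replaced by their background means."""
    return b0 + sum(beta[i] * (x[i] if i in coalition else mu[i])
                    for i in range(n))

def shapley(i):
    """Exact Shapley value of feature i: its marginal contribution
    averaged over every ordering in which features can be added."""
    total = 0.0
    for order in permutations(range(n)):
        before = set(order[:order.index(i)])
        total += value(before | {i}) - value(before)
    return total / factorial(n)

phi = [shapley(i) for i in range(n)]

# Closed form for a linear model with independent features:
for i in range(n):
    assert isclose(phi[i], beta[i] * (x[i] - mu[i]))

# Local accuracy: contributions sum to f(x) - E[f(x)].
assert isclose(sum(phi), value(set(range(n))) - value(set()))
print(phi)  # [4.0, 1.0, 2.0]
```

For tree ensembles like XGBoost the same averaging is done efficiently by TreeSHAP rather than by enumerating orderings, but the additivity property checked above holds identically.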

Syllabus

Introduction
Output of predictive models should be explainable
Global and local explainability
Embedding SHAP values in production output
Contemporary machine learning models
Is using SHAP a good idea?
Questions
The Secret To Great AI
Build a career in data
Flawed ML Security
AI Governance
Continual Visual Learning
Data Summit!

Taught by

Open Data Science
