Overview
Explore machine learning interpretability in a 33-minute talk delivered by Moneyfarm's Head of Data, Giorgio Clauser, covering the evolution of AI model explainability from basic beta coefficients to SHAP (Shapley) values. Master both global and local explainability through practical Python Jupyter Notebook demonstrations, with special focus on complex models such as XGBoost and their application to binary classification. Learn how to address regulatory requirements and the human need to understand AI decisions while gaining hands-on experience in data visualization and model interpretation techniques. Discover practical approaches to embedding SHAP values in production output, evaluate when SHAP is a good fit, and see contemporary machine learning models applied to real-world fintech examples. Progress through key topics including explainable predictive model outputs, AI governance, ML security considerations, and the future of continual visual learning in artificial intelligence.
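The hands-on portion of the talk centers on explaining an XGBoost binary classifier with SHAP values in a Jupyter Notebook. As a rough illustration of that kind of workflow (not code from the talk), the sketch below trains an XGBoost model on synthetic data and contrasts global explainability (average feature impact across the dataset) with local explainability (per-prediction attributions); the dataset, feature count, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch: global vs. local SHAP explanations for an XGBoost binary classifier.
# Dataset, feature count, and hyperparameters are illustrative assumptions.
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data standing in for a real (e.g. fintech) dataset.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_train, y_train)

# TreeExplainer computes exact SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)  # shape: (n_samples, n_features)

# Global explainability: mean absolute SHAP value per feature across the test set.
global_importance = np.abs(shap_values).mean(axis=0)
print("Global importance (mean |SHAP| per feature):", global_importance)

# Local explainability: each feature's contribution to a single prediction,
# relative to the model's baseline output (expected value, in log-odds).
print("Local explanation for the first test row:", shap_values[0])
print("Model baseline (expected value):", explainer.expected_value)
```

In a notebook setting, the same arrays are typically fed to `shap.summary_plot` (global view) and `shap.force_plot` (local view) to produce the visualizations the talk demonstrates.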
Syllabus
Introduction
Output of predictive models should be explainable
Global and local explainability
Embedding SHAP values in production output
Contemporary machine learning models
Is using SHAP a good idea?
Questions
The Secret To Great AI
Build a career in data
Flawed ML Security
AI Governance
Continual Visual Learning
Data Summit!
Taught by
Open Data Science