Overview
Learn best practices for producing explainable AI and interpretable machine learning solutions.
Syllabus
Introduction
- Exploring the world of explainable AI and interpretable machine learning
- Target audience
- What you should know
- Understanding what your models predict and why
- Variable importance and reason codes
- Comparing IML and XAI
- Trends in AI making the XAI problem more prominent
- Local and global explanations
- XAI for debugging models
- KNIME support for global and local explanations
- Challenges of variable attribution with linear regression
- Challenges of variable attribution with neural networks
- Rashomon effect
- What qualifies as a black box?
- Why do we have black box models?
- What is the accuracy-interpretability tradeoff?
- The argument against XAI
- Introducing KNIME
- Building models in KNIME
- Understanding looping in KNIME
- Where to find KNIME support for XAI
- Providing global explanations with partial dependence plots
- Using surrogate models for global explanations
- Developing and interpreting a surrogate model with KNIME
- Permutation feature importance
- Global feature importance demo
- Developing an intuition for Shapley values
- Introducing SHAP
- Using LIME to provide local explanations for neural networks
- What are counterfactuals?
- KNIME's Local Explanation View node
- Demonstrating KNIME's XAI View node
- General advice for better IML
- Why feature engineering is critical for IML
- CORELS and recent trends
- Continuing to explore XAI
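
The global-explanation lessons above (partial dependence plots, surrogate models, permutation feature importance) are taught in the course using KNIME's visual workflows. As a rough illustration of the permutation idea only, here is a minimal Python sketch using scikit-learn; the dataset, model, and parameter choices are illustrative assumptions and are not materials from the course.

```python
# Minimal sketch of permutation feature importance (illustrative only;
# the course demonstrates this technique in KNIME, not Python).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit an opaque "black box" model.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out score;
# a large drop means the model relied heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five most important features.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.4f}")
```

Because the technique only needs predictions and a scoring function, it works for any black-box model, which is why it appears in the course's global-explanation section.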
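
For the lesson on developing an intuition for Shapley values: a feature's attribution can be read as its marginal contribution to the prediction, averaged over every ordering in which features could be "revealed" to the model. The worked example below computes exact Shapley values for a made-up three-feature coalition game; the payoff numbers are invented for illustration and are not from the course.

```python
# Exact Shapley values for a tiny, hypothetical three-player coalition game.
from itertools import permutations

# Hypothetical payoff of each coalition of features {A, B, C} -- think of it
# as the model's output when only those features are known (values made up).
payoff = {
    frozenset(): 0.0,
    frozenset("A"): 10.0,
    frozenset("B"): 7.0,
    frozenset("C"): 5.0,
    frozenset("AB"): 18.0,
    frozenset("AC"): 14.0,
    frozenset("BC"): 11.0,
    frozenset("ABC"): 24.0,
}
players = ["A", "B", "C"]

# Average each player's marginal contribution over all orderings.
shapley = {p: 0.0 for p in players}
orderings = list(permutations(players))
for order in orderings:
    seen = frozenset()
    for p in order:
        shapley[p] += payoff[seen | {p}] - payoff[seen]
        seen = seen | {p}
for p in players:
    shapley[p] /= len(orderings)

print(shapley)                # per-feature attributions
print(sum(shapley.values()))  # sums to the full-coalition payoff: 24.0
```

The attributions always sum to the full-coalition payoff (the efficiency property), which is what lets SHAP decompose a single prediction into per-feature reason codes.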
Taught by
Keith McCormick