
LinkedIn Learning

Machine Learning and AI Foundations: Producing Explainable AI (XAI) and Interpretable Machine Learning Solutions

via LinkedIn Learning

Overview

Learn best practices for producing explainable AI and interpretable machine learning solutions.

Syllabus

Introduction
  • Exploring the world of explainable AI and interpretable machine learning
  • Target audience
  • What you should know
1. What Are XAI and IML?
  • Understanding what your models predict and why
  • Variable importance and reason codes
  • Comparing IML and XAI
  • Trends in AI making the XAI problem more prominent
  • Local and global explanations
  • XAI for debugging models
  • KNIME support of global and local explanations
2. Why Isolating a Variable’s Contribution Is Difficult
  • Challenges of variable attribution with linear regression
  • Challenges of variable attribution with neural networks
  • Rashomon effect
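
The chapter above covers these attribution challenges conceptually, and the course itself works in KNIME rather than code. Purely as a companion sketch (the synthetic data and variable names are assumptions for illustration, not course materials), the Python fragment below shows how two strongly correlated predictors let nearly equally accurate linear models tell very different importance stories, a small-scale version of the Rashomon effect.

```python
# Illustrative sketch with synthetic data (assumption: not from the course,
# which uses KNIME). Two highly correlated predictors let near-equivalent
# linear models assign very different coefficients, a toy version of the
# variable-attribution problem and the Rashomon effect.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)   # x2 is almost a copy of x1
y = 3.0 * x1 + rng.normal(scale=0.5, size=n)

# Fit on both predictors: the shared signal is split between them unpredictably.
both = LinearRegression().fit(np.column_stack([x1, x2]), y)
print("coefficients using x1 and x2:", both.coef_)

# Drop either predictor and the fit is almost as good, but the importance
# story changes completely; which variable deserves the credit is not well defined.
only_x1 = LinearRegression().fit(x1.reshape(-1, 1), y)
only_x2 = LinearRegression().fit(x2.reshape(-1, 1), y)
print("only x1:", only_x1.coef_, "R^2:", only_x1.score(x1.reshape(-1, 1), y))
print("only x2:", only_x2.coef_, "R^2:", only_x2.score(x2.reshape(-1, 1), y))
```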
3. Black Box Model 101
  • What qualifies as a black box?
  • Why do we have black box models?
  • What is the accuracy-interpretability tradeoff?
  • The argument against XAI
4. Introduction to KNIME for XAI and IML
  • Introducing KNIME
  • Building models in KNIME
  • Understanding looping in KNIME
  • Where to find available KNIME support for XAI
5. XAI Techniques: Global Explanations
  • Providing global explanations with partial dependence plots
  • Using surrogate models for global explanations
  • Developing and interpreting a surrogate model with KNIME
  • Permutation feature importance
  • Global feature importance demo
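
These global techniques are demonstrated in KNIME during the course. For readers who prefer code, a minimal scikit-learn sketch of permutation feature importance and a partial dependence plot might look like the following; the dataset and model here are stand-in assumptions, not the course's own demo materials.

```python
# Minimal sketch of two global-explanation techniques from this chapter,
# using scikit-learn instead of KNIME (assumption: the breast-cancer dataset
# and random forest are just stand-ins for any tabular model).
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import PartialDependenceDisplay, permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation feature importance: shuffle one column at a time on held-out
# data and measure how much the model's score drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top5 = sorted(zip(X.columns, result.importances_mean),
              key=lambda pair: pair[1], reverse=True)[:5]
for name, score in top5:
    print(f"{name}: {score:.3f}")

# Partial dependence plot: average predicted probability as one feature varies.
PartialDependenceDisplay.from_estimator(model, X_test, ["mean radius"])
plt.show()
```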
6. Techniques for Local Explanations
  • Developing an intuition for Shapley values
  • Introducing SHAP
  • Using LIME to provide local explanations for neural networks
  • What are counterfactuals?
  • KNIME's Local Explanation View node
  • Demonstrating KNIME's XAI View node
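
The Shapley-value and SHAP lessons are taught with KNIME components. As an intuition-builder only (a hand-written toy model, an assumption rather than the course's workflow), the sketch below computes exact Shapley values for one prediction by enumerating feature coalitions, substituting a background value for any "absent" feature; real SHAP implementations approximate this far more efficiently.

```python
# Toy exact Shapley values for a single prediction (assumption: a hypothetical
# three-feature scoring function, not a model from the course).
from itertools import combinations
from math import factorial

def model(x):
    # Hypothetical scoring function: one linear term, one interaction, a constant.
    return 3.0 * x[0] + 2.0 * x[1] * x[2] + 1.0

background = [0.0, 0.0, 0.0]   # the "average" case used for absent features
instance   = [1.0, 1.0, 1.0]   # the prediction we want to explain

def value(coalition):
    # Features in the coalition take the instance's values; the rest take background values.
    x = [instance[i] if i in coalition else background[i] for i in range(len(instance))]
    return model(x)

n = len(instance)
for i in range(n):
    others = [j for j in range(n) if j != i]
    phi = 0.0
    for size in range(n):
        for subset in combinations(others, size):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            phi += weight * (value(set(subset) | {i}) - value(set(subset)))
    print(f"Shapley value of feature {i}: {phi:.3f}")

# The Shapley values sum to model(instance) - model(background), so together
# they fully attribute the gap between this prediction and the baseline.
print("sum check:", model(instance) - model(background))
```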
7. IML Techniques
  • General advice for better IML
  • Why feature engineering is critical for IML
  • CORELS and recent trends
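
CORELS produces certifiably optimal rule lists; as a rough stand-in only (an assumption, not the course's CORELS material), the fragment below fits a deliberately shallow decision tree and prints its rules, which conveys the same IML idea that the model should be its own explanation.

```python
# A deliberately simple, directly readable model: the IML alternative to
# explaining a black box after the fact. (Assumption: a shallow scikit-learn
# tree is used here as a stand-in for the rule lists CORELS produces.)
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The entire model fits in a few lines of if/else rules.
print(export_text(tree, feature_names=list(X.columns)))
```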
Conclusion
  • Continuing to explore XAI

Taught by

Keith McCormick

Reviews

4.6 rating at LinkedIn Learning based on 121 ratings

