
Interpretable Machine Learning

Duke University via Coursera

Overview

As Artificial Intelligence (AI) becomes integrated into high-risk domains like healthcare, finance, and criminal justice, it is critical that those responsible for building these systems think outside the black box and develop systems that are not only accurate but also transparent and trustworthy. This course is a comprehensive, hands-on guide to Interpretable Machine Learning, empowering you to develop AI solutions that are aligned with responsible AI principles. You will also gain an understanding of the emerging field of Mechanistic Interpretability and its use in understanding large language models.

Through discussions, case studies, programming labs, and real-world examples, you will learn to:

1. Describe interpretable machine learning and differentiate between interpretability and explainability.
2. Explain and implement regression models in Python.
3. Demonstrate knowledge of generalized models in Python.
4. Explain and implement decision trees in Python.
5. Demonstrate knowledge of decision rules in Python.
6. Define and explain neural network interpretable model approaches, including prototype-based networks, monotonic networks, and Kolmogorov-Arnold networks.
7. Explain foundational Mechanistic Interpretability concepts, including features and circuits.
8. Describe the Superposition Hypothesis.
9. Define Representation Learning and analyze current research on scaling it to LLMs.

This course is ideal for data scientists or machine learning engineers who have a firm grasp of machine learning but have had little exposure to interpretability concepts. By mastering Interpretable Machine Learning approaches, you'll be equipped to create AI solutions that are not only powerful but also ethical and trustworthy, solving critical challenges in domains like healthcare, finance, and criminal justice. To succeed in this course, you should have an intermediate understanding of machine learning concepts like supervised learning and neural networks.

Syllabus

  • Regression and Generalized Models
    • In this module, you will be introduced to the concepts of regression and generalized models for interpretability. You will learn how to describe interpretable machine learning and differentiate between interpretability and explainability, explain and implement regression models in Python, and demonstrate knowledge of generalized models in Python. You will apply what you learn through discussions, guided programming labs, and a quiz assessment. (A minimal regression sketch appears after this syllabus.)
  • Rules, Trees, and Neural Networks
    • In this module, you will be introduced to the concepts of decision trees, decision rules, and interpretability in neural networks. You will learn how to explain and implement decision trees and decision rules in Python, and define and explain neural network interpretable model approaches, including prototype-based networks, monotonic networks, and Kolmogorov-Arnold networks. You will apply what you learn through discussions, guided programming labs, and a quiz assessment. (A decision-tree sketch appears after this syllabus.)
  • Introduction to Mechanistic Interpretability
    • In this module, you will be introduced to the concept of Mechanistic Interpretability. You will learn how to explain foundational Mechanistic Interpretability concepts, including features and circuits; describe the Superposition Hypothesis; and define Representation Learning so that you can analyze current research on scaling it to LLMs. You will apply what you learn through discussions, guided programming labs, and a quiz assessment. (A toy superposition sketch appears after this syllabus.)
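
To give a flavor of the first module, here is a minimal sketch, assuming scikit-learn, of the kind of inherently interpretable model it covers: a linear regression whose coefficients can be read directly as explanations. The diabetes dataset and the exact workflow are illustrative choices, not course material.

```python
# Minimal sketch of an interpretable regression model (illustrative,
# not course material). Assumes scikit-learn is installed; the
# diabetes dataset stands in for any tabular regression problem.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = LinearRegression().fit(X, y)

# Each coefficient is directly interpretable: the expected change in
# the target for a one-unit change in that (already standardized) feature.
for name, coef in zip(X.columns, model.coef_):
    print(f"{name:>6}: {coef:+.2f}")
```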
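
Similarly, a shallow decision tree like those in the second module can be printed as human-readable if/then rules. This sketch also assumes scikit-learn; the iris dataset and depth limit are illustrative.

```python
# Minimal sketch of an interpretable decision tree (illustrative).
# Assumes scikit-learn; the iris dataset stands in for any classifier.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

# A shallow tree can be read directly as a set of if/then decision rules.
print(export_text(tree, feature_names=list(iris.feature_names)))
```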
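
Finally, a toy illustration of the Superposition Hypothesis from the third module: a model can represent more features than it has dimensions if the features are sparse, at the cost of small interference between them. This is a minimal NumPy sketch loosely in the spirit of toy-model superposition experiments, not course material; the feature and dimension counts are arbitrary.

```python
# Toy illustration of the Superposition Hypothesis (illustrative sketch;
# feature and dimension counts are arbitrary).
import numpy as np

rng = np.random.default_rng(0)
n_features, n_dims = 8, 3  # more features than dimensions

# Assign each feature a random unit direction in the low-dimensional space.
W = rng.normal(size=(n_features, n_dims))
W /= np.linalg.norm(W, axis=1, keepdims=True)

# Encode a sparse input (only feature 2 active) and decode by projection.
x = np.zeros(n_features)
x[2] = 1.0
hidden = W.T @ x        # compress 8 features into 3 dimensions
recovered = W @ hidden  # project back onto every feature direction

# The active feature is recovered strongly; the rest show small
# "interference" because the directions are not orthogonal.
print(np.round(recovered, 2))
```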

Taught by

Brinnae Bent, PhD

