
Interpretable Machine Learning Applications: Part 4

Coursera Project Network via Coursera

Overview

In this 1-hour guided project, you will learn how to use the What-If Tool (WIT) in the context of training and testing machine learning prediction models. In particular, you will learn how to: a) set up a machine learning application in Python using interactive notebooks in Google's Colab(oratory), a.k.a. "zero configuration," environment; b) import and prepare the data; c) train and test classifiers as prediction models; d) analyze the behavior of the trained prediction models with WIT for specific data points (on an individual basis); and e) extend that analysis to a global basis, i.e., with all test data considered.
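Steps (b) and (c) above can be sketched with scikit-learn. This is a minimal illustration, not the course's notebook: sklearn's built-in `load_wine` dataset stands in for the UCI white-wine CSV used in the project, and the split ratio and hyperparameters are assumptions.

```python
# Sketch of steps (b)-(c): import/prepare data, then train and test
# two classifiers as prediction models.
from sklearn.datasets import load_wine          # stand-in for the UCI wine-quality data
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

models = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
scores = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    scores[name] = model.score(X_test, y_test)  # accuracy on held-out data
print(scores)
```

Once the models are trained, WIT (step d and e) is attached to them in the notebook as a visualization widget over the test examples.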

Syllabus

  • Introducing the What-If Tool as Interpretable Machine Learning Application
    • By the end of this project, you will be able to use Google’s What-If Tool as a visualization widget that provides insights into the behavior of machine learning prediction models at both the individual and global levels. As a use case, we will work with the white wine quality dataset, available at https://archive.ics.uci.edu/ml/datasets/wine+quality, and two classifiers: a Decision Tree and a Random Forest. Since the approach is independent of the prediction model, it can easily be extended to more complex models, such as ANN-based ones. Because this explanation technique rests largely on visualizing statistical descriptors and the marginal effects that changes in feature-value pairs have on predictions, you can study influencers and dependencies within a model as well as contrast models with each other. In this sense, the project will boost your career not only as an ML developer and modeler who can explain and justify the behavior of prediction models of varying complexity, but also as a data scientist and decision-maker in a business environment.
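The "marginal effects of changes in feature-value pairs" that WIT visualizes can be reproduced by hand: sweep one feature over a range of values while holding the others fixed, and watch how the average prediction shifts. A hedged sketch of that idea (using sklearn's `load_wine` as a stand-in dataset, not WIT itself):

```python
# Partial-dependence-style sketch: vary one feature, hold the rest fixed,
# and record the effect on the model's average predicted probability.
import numpy as np
from sklearn.datasets import load_wine          # stand-in for the white-wine data
from sklearn.ensemble import RandomForestClassifier

X, y = load_wine(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def marginal_effect(model, X, feature, grid):
    """Mean predicted probability of class 0 as `feature` sweeps over `grid`."""
    means = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature] = value              # all other features unchanged
        means.append(model.predict_proba(X_mod)[:, 0].mean())
    return np.array(means)

grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 5)
effect = marginal_effect(model, X, feature=0, grid=grid)
print(effect)  # how the average prediction shifts with feature 0
```

Plotting `effect` against `grid` gives the kind of dependence curve WIT renders interactively; comparing the curves of two models is one way to contrast them.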

Taught by

Epaminondas Kapetanios

Reviews

4.6 rating at Coursera based on 11 ratings

