Overview
Dive into a 38-minute conference talk exploring best practices, principles, patterns, and techniques for production monitoring of machine learning models. Learn how to apply standard microservice monitoring techniques to deployed ML models and explore advanced paradigms like concept drift, outlier detection, and explainability. Follow a hands-on example of training an image classification model, deploying it as a microservice in Kubernetes, and implementing advanced monitoring components. Discover architectural patterns that abstract complex monitoring techniques into scalable infrastructural components, enabling monitoring across numerous heterogeneous ML models. Gain insights into AI Explainers, Outlier Detectors, Concept Drift Detectors, and Adversarial Detectors, as well as standardized interfaces for large-scale monitoring implementation.
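To give a flavour of the kind of monitoring component the talk covers, the sketch below wraps a Kolmogorov-Smirnov drift detector from the open-source Alibi Detect library (referenced in the syllabus) behind a small scoring helper. The reference data, feature dimensions, and the check_drift helper are illustrative assumptions for this sketch, not the talk's actual code.

```python
# Minimal sketch of a drift-detection component using Alibi Detect.
# Reference data, dimensions, and the check_drift helper are illustrative
# assumptions, not the implementation shown in the talk.
import numpy as np
from alibi_detect.cd import KSDrift

# Reference batch representing the feature distribution seen at training time.
x_ref = np.random.randn(500, 32).astype(np.float32)

# Kolmogorov-Smirnov drift detector with a 5% significance threshold.
drift_detector = KSDrift(x_ref, p_val=0.05)

def check_drift(batch: np.ndarray) -> dict:
    """Score an incoming batch of model inputs for feature drift."""
    preds = drift_detector.predict(batch, return_p_val=True, return_distance=True)
    return {
        "is_drift": bool(preds["data"]["is_drift"]),
        "p_vals": preds["data"]["p_val"].tolist(),
    }

# Example: a batch with a shifted mean should be flagged as drifted.
incoming = np.random.randn(200, 32).astype(np.float32) + 1.5
print(check_drift(incoming))
```

In the architectural patterns the talk describes, a detector like this would run as its own infrastructural component alongside the model microservice, exposing a standardized scoring interface rather than being baked into each model's serving code.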
Syllabus
Introduction
Welcome
Overview
Motivations
Practical Use Case
Production Use Case
Deployment
Microservice
Machine Learning Monitoring Anatomy
Performance Monitoring Principles
Performance Monitoring Patterns
Performance Monitoring Metrics
Metric Servers
Outlier and Drift
Alibi Detect
Outlier Detect
Drift Detect
Outlier Detector
Explainability
Alibi Explain
Alibi Detect
Architectural Patterns
Summary
Outro
Taught by
Open Data Science