
ML Observability - A Critical Piece for Making Models Work in the Real World

Open Data Science via YouTube

Overview

Explore the critical role of machine learning observability in making models work effectively in real-world scenarios. Learn about common challenges in productionizing ML and discover essential techniques for monitoring and improving model performance. Dive into drift analysis, performance evaluation, data quality checks, and explainability methods. Gain insight into how to quickly visualize, detect, and diagnose problems with models in production, enabling faster issue resolution. Understand the four pillars of ML observability and their implementation, including performance tracing, data quality monitoring, and model explainability. Discover practical tools and metrics for measuring drift, such as KL Divergence and Earth Mover's Distance. By the end of this webinar, acquire the knowledge to ramp up an ML observability practice and enhance your modern ML infrastructure stack.
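
The two drift metrics named above are straightforward to compute with standard scientific Python. Below is a minimal sketch, assuming a reference sample (e.g. training data) and a production sample as NumPy arrays; the variable names and binning scheme are illustrative, not taken from the webinar itself.

```python
import numpy as np
from scipy.stats import entropy, wasserstein_distance

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 10_000)    # stand-in for training data
production = rng.normal(0.3, 1.2, 10_000)   # stand-in for live traffic

# KL Divergence compares discrete distributions, so bin both samples on a
# shared grid and smooth empty bins to keep the metric finite.
bins = np.histogram_bin_edges(np.concatenate([reference, production]), bins=50)
p, _ = np.histogram(reference, bins=bins)
q, _ = np.histogram(production, bins=bins)
kl = entropy(p + 1e-10, q + 1e-10)  # scipy normalizes the inputs internally

# Earth Mover's Distance (1-D Wasserstein) works directly on the raw samples.
emd = wasserstein_distance(reference, production)

print(f"KL Divergence:          {kl:.4f}")
print(f"Earth Mover's Distance: {emd:.4f}")
```

Note that KL Divergence is asymmetric and sensitive to the binning choice, while Earth Mover's Distance is symmetric and expressed in the units of the feature itself, which is one reason monitoring tools often report both.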

Syllabus

Introduction
Pain Points
ML Monitoring
Four Pillars
Performance
Fast Actuals
Models Without Fast Actuals
What Is Drift?
Why Do We Monitor for Drift?
Metrics to Measure Drift
KL Divergence
Earth Mover's Distance
Monitors
Data Quality
Explainability
How to Implement Model Explainability
SHAP Values (see the sketch after this syllabus)
Example
Questions
Arize Platform
Performance Tracing
Integrations
Model Drift
Performance Trace
Drift Tab
Dashboards
Monitoring
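
As a companion to the explainability items in the syllabus, here is a minimal sketch of computing SHAP values for a tree model with the shap library. The XGBoost model and the California housing dataset are stand-in assumptions for illustration; the webinar's own Arize-based workflow is not reproduced here.

```python
import shap
import xgboost
from sklearn.datasets import fetch_california_housing

# Train a simple model to explain (placeholder for your production model).
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=100).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles; each value is
# one feature's contribution to one prediction relative to a baseline.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# Global view: mean absolute SHAP value per feature acts as an importance
# ranking, which is how explainability ties back into performance tracing.
shap.summary_plot(shap_values, X.iloc[:100])
```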

Taught by

Open Data Science
