
YouTube

Testing ML Models in Production - Detecting Data and Concept Drift

Databricks via YouTube

Overview

Explore a comprehensive 55-minute conference talk on testing machine learning models in production. Learn about core statistical tests and metrics for detecting data and concept drift, preventing models from becoming stale and detrimental to business. Dive deep into implementing robust testing and monitoring frameworks using open-source tools like MLflow, SciPy, and statsmodels. Gain valuable insights from Databricks' customer experiences and discover key tenets for testing model and data validity in production. Walk through a generalizable demo utilizing MLflow to enhance reproducibility. Cover topics including the ML cycle, data monitoring, KS tests, categorical features, one-way chi-squared tests, monitoring tools, MLflow notebooks, ML workflows, data logging, model registry, feature checks, and model staging and migration.
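The talk centers on two of the statistical tests named above: a two-sample Kolmogorov-Smirnov test for numeric features and a one-way chi-squared test for categorical features. The following is a minimal sketch (not code from the talk) of how such checks might look with SciPy; the column values, significance threshold, and helper names are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from scipy import stats

ALPHA = 0.05  # assumed significance threshold for flagging drift

def numeric_drift(reference: pd.Series, incoming: pd.Series) -> bool:
    """Two-sample KS test: a low p-value suggests the distributions differ."""
    _, p_value = stats.ks_2samp(reference, incoming)
    return p_value < ALPHA

def categorical_drift(reference: pd.Series, incoming: pd.Series) -> bool:
    """One-way chi-squared test of incoming category counts against
    expected counts derived from the reference proportions."""
    proportions = reference.value_counts(normalize=True)
    observed = incoming.value_counts().reindex(proportions.index, fill_value=0)
    expected = proportions * len(incoming)
    _, p_value = stats.chisquare(f_obs=observed, f_exp=expected)
    return p_value < ALPHA

# Example usage with synthetic data (a shifted mean and shifted category mix)
rng = np.random.default_rng(0)
ref_price = pd.Series(rng.normal(100, 10, 1000))
new_price = pd.Series(rng.normal(110, 10, 1000))
print("price drift:", numeric_drift(ref_price, new_price))

ref_cat = pd.Series(rng.choice(["a", "b", "c"], 1000, p=[0.5, 0.3, 0.2]))
new_cat = pd.Series(rng.choice(["a", "b", "c"], 1000, p=[0.2, 0.3, 0.5]))
print("category drift:", categorical_drift(ref_cat, new_cat))
```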

Syllabus

Intro
ML Cycle
Data Monitoring
KS Test
Categorical Features
One-way Chi-squared
Monitoring Tests
Tools
MLflow
Notebooks
ML Workflow
MLflow Delta
Other Notebooks
Widgets
Notebook Setup
Train Scikit-Learn Pipeline
Data Logging
MLflow Run
MLflow Model Registry
MLflow Experiment
Model Registry
New Data
Feature Checks
MLflow Registry
Null Proportion
New Incoming Data
Chi-squared Test
Action
Model Parameters
Model Staging
Model Migration
Missingness Check
Price Check
Categorical Check
Recap
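
The syllabus steps around the MLflow run, model registry, and model staging and migration can be pictured with a short sketch. This is a minimal example assuming the standard MLflow tracking and registry APIs, not the speakers' actual notebook; the model name, metric name, and drift_checks_pass() helper are hypothetical placeholders.

```python
import mlflow
from mlflow.tracking import MlflowClient
from sklearn.linear_model import LinearRegression
from sklearn.datasets import make_regression

MODEL_NAME = "listing_price_model"  # assumed registry name

X, y = make_regression(n_samples=500, n_features=5, noise=10, random_state=0)

with mlflow.start_run():
    model = LinearRegression().fit(X, y)
    mlflow.log_metric("train_r2", model.score(X, y))
    # Logging with registered_model_name creates a new version in the model registry
    mlflow.sklearn.log_model(model, artifact_path="model",
                             registered_model_name=MODEL_NAME)

client = MlflowClient()
latest = client.get_latest_versions(MODEL_NAME, stages=["None"])[0]

def drift_checks_pass() -> bool:
    # Placeholder for the null-proportion, KS, and chi-squared checks sketched earlier
    return True

# Stage the new version, then migrate it to Production once the checks pass
client.transition_model_version_stage(MODEL_NAME, latest.version, stage="Staging")
if drift_checks_pass():
    client.transition_model_version_stage(MODEL_NAME, latest.version, stage="Production")
```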

Taught by

Databricks

