Overview
Learn how to monitor machine learning models in production and keep them healthy in this 37-minute lecture from the Full Stack Deep Learning Spring 2021 series. Explore the reasons behind model performance degradation post-deployment, understand data drift, and discover what aspects of your models to monitor. Gain insights into measuring changes and determining whether those changes are harmful, and familiarize yourself with monitoring tools. Examine the relationship between monitoring and your broader ML system, and conclude with key takeaways for maintaining optimal model performance in real-world applications.
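As a taste of the "measuring changes" topic, here is a minimal sketch (not from the lecture itself) of one common way to flag data drift: comparing a reference feature distribution from training time against recent production values with a two-sample Kolmogorov-Smirnov test from SciPy. The function name, threshold, and data below are illustrative assumptions.

# Illustrative sketch: flag drift in one numeric feature by comparing a
# reference sample against a recent production sample with a KS test.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference: np.ndarray, production: np.ndarray, alpha: float = 0.01) -> bool:
    """Return True if the production distribution differs significantly
    from the reference distribution (alpha is a hypothetical threshold)."""
    statistic, p_value = ks_2samp(reference, production)
    return p_value < alpha

# Example with synthetic data: the production mean has shifted.
rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)
production = rng.normal(loc=0.5, scale=1.0, size=5_000)
print(detect_drift(reference, production))  # True: the shift is detected

In practice, as the lecture's later sections discuss, the harder questions are which quantities to monitor and how to decide when a detected change actually warrants action.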
Syllabus
- Introduction
- Model Performance Degrades Post-Deployment
- Data Drift
- What To Monitor?
- How To Measure When Things Change
- How To Tell If A Change Is Bad
- Tools For Monitoring
- Monitoring And Your Broader ML System
- Takeaways
Taught by
The Full Stack