
Mandoline - Model Evaluation under Distribution Shift

Stanford University via YouTube

Overview

Explore a comprehensive framework for evaluating machine learning models under distribution shift in this Stanford University lecture. Dive into the Mandoline approach, which leverages user-defined "slicing functions" to guide importance weighting and improve performance estimation on target distributions. Learn how this method outperforms standard baselines in NLP and vision tasks, and understand its connection to interactive ML systems. Gain insights into the theoretical foundations of the framework, including density ratio estimation and its error scaling. Discover the broader implications for model evaluation, hidden stratification, and iterative model development processes in the context of deploying ML models in real-world scenarios.
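The slice-guided importance weighting described above can be sketched in a few lines. This is a minimal illustration, not Mandoline's actual estimator: the paper fits density ratios over slice representations, while the sketch below simply matches the frequency of each slice signature between source and target data. All function names are illustrative.

```python
import numpy as np

def slice_weights(source_slices, target_slices):
    """Per-example importance weights from binary slice matrices.

    source_slices, target_slices: (n, k) arrays of 0/1 slice memberships
    (one column per user-defined slicing function). Each source example is
    weighted by the frequency of its slice signature in the target relative
    to the source -- a histogram-matching stand-in for density ratio
    estimation over slices.
    """
    def signature_counts(slices):
        sigs = [tuple(row) for row in slices]
        counts = {}
        for s in sigs:
            counts[s] = counts.get(s, 0) + 1
        return sigs, counts, len(sigs)

    src_sigs, src_counts, n_src = signature_counts(source_slices)
    _, tgt_counts, n_tgt = signature_counts(target_slices)

    return np.array([
        (tgt_counts.get(s, 0) / n_tgt) / (src_counts[s] / n_src)
        for s in src_sigs
    ])

def weighted_accuracy(correct, weights):
    # Importance-weighted estimate of accuracy on the target distribution,
    # computed from labeled source examples only.
    return float(np.sum(weights * correct) / np.sum(weights))
```

For example, if a slice covers 50% of the labeled source set but 80% of the unlabeled target set, source examples in that slice get weight 1.6 and the rest get 0.4, shifting the accuracy estimate toward performance on the over-represented slice.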

Syllabus

Intro
Outline
The ML model development process
Model Evaluation
Motivation
Common approach: importance weighting
Motivating example
Mandoline: Slice-based reweighting framework
The theory behind using slices
More formally...
Density Ratio Estimation
Experiments: tasks
Experiments: compare to reweighting on x
Summary
Taking a step back - how do we get slices? What are slices?
Measuring model performance
Hidden Stratification: Approach
ML model development process, revisited
Another angle - how else can we evaluate?
"Closing the loop" - how do we update?

Taught by

Stanford MedAI
