Towards Good Validation Metrics for Generative Models in Offline Model-Based Optimization
Valence Labs via YouTube
Overview
Explore a comprehensive conference talk on developing effective validation metrics for generative models in offline model-based optimization. Dive into the proposed evaluation framework, which measures a generative model's ability to extrapolate by interpreting training and validation splits as draws from truncated ground-truth distributions. Examine the challenge of determining which validation metric correlates best with the expected ground-truth value, as measured by the oracle, of generated candidates. Compare various validation metrics for generative adversarial networks within this framework. Discuss the limitations of the approach on existing datasets and potential mitigation strategies. Learn about model-based optimization, its naive approach, and how to view it through a generative modeling lens. Review methods for evaluating machine learning models and their application to model-based optimization. Investigate the types of datasets used in this field and approaches for when ground truth is unavailable. Conclude with key takeaways and engage in a Q&A session to deepen understanding of this cutting-edge research.
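To make the framework concrete, here is a minimal, illustrative Python sketch of the two ideas described above: building training/validation splits by truncating the ground-truth score distribution, and checking how well a candidate validation metric tracks the oracle-measured value of generated samples across model checkpoints. The function names, the `model.sample` / `oracle` interfaces, and the quantile threshold are assumptions for illustration, not code from the talk.

```python
# Illustrative sketch of the evaluation framework (assumed interfaces,
# not code from the talk).
import numpy as np
from scipy.stats import spearmanr


def truncated_splits(X, y, gamma=0.8):
    """Train on the lower part of the ground-truth score distribution and
    validate on the upper tail, so the validation split probes extrapolation."""
    thresh = np.quantile(y, gamma)
    train = (X[y <= thresh], y[y <= thresh])
    valid = (X[y > thresh], y[y > thresh])
    return train, valid


def metric_oracle_correlation(checkpoints, validation_metric, oracle, n=128):
    """For each generative-model checkpoint, compare a cheap validation metric
    against the mean ground-truth (oracle) value of the samples it generates.
    A good metric ranks checkpoints the same way the oracle would."""
    metric_scores, oracle_scores = [], []
    for model in checkpoints:
        candidates = model.sample(n)                         # generated designs
        metric_scores.append(validation_metric(model, candidates))
        oracle_scores.append(float(np.mean(oracle(candidates))))
    return spearmanr(metric_scores, oracle_scores).correlation
```

In practice the ground-truth oracle is only available for benchmark datasets, which is why the talk also covers what to do when ground truth is not available.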
Syllabus
- Intro
- Model-based Optimization (MBO) & the Naive Approach
- Looking at MBO Through a Generative Modelling Lens
- Refresher on Evaluating Models in ML
- Evaluation for MBO
- Types of MBO Datasets
- When Ground Truth Is Not Available
- Conclusion
- Q&A
Taught by
Valence Labs