Overview
Explore the concept of Probably Approximately Correct (PAC) learning in this 31-minute conference talk by Peter Rugg. Delve into the foundations of machine learning, examining what types of problems can be learned and what it means to learn a problem. Understand the PAC framework's approach to specifying worst-case error bounds for problem learnability. Follow the formulation of supervised binary classification and the definition of PAC learning. Investigate methods for determining PAC learnability, covering topics such as proper and improper learning, agnostic learning, and the Vapnik-Chervonenkis dimension. Gain insights into the significance and influence of PAC in machine learning theory, as well as its criticisms.
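To make the talk's notion of worst-case error bounds concrete, the sketch below computes the standard sample-complexity bound for PAC learning a finite hypothesis class in the realizable setting, m ≥ (1/ε)(ln|H| + ln(1/δ)). This is a textbook bound chosen for illustration; the talk itself may present different or more general results, and the function name is ours.

```python
import math

def pac_sample_bound(hypothesis_count, epsilon, delta):
    """Number of i.i.d. examples sufficient to PAC-learn a finite
    hypothesis class H in the realizable setting, so that with
    probability >= 1 - delta the learned hypothesis has error <= epsilon:
        m >= (1/epsilon) * (ln|H| + ln(1/delta)).
    (Illustrative helper; not from the talk itself.)"""
    return math.ceil((math.log(hypothesis_count) + math.log(1 / delta)) / epsilon)

# Example: 1000 hypotheses, tolerate 5% error, fail with probability at most 5%.
m = pac_sample_bound(1000, epsilon=0.05, delta=0.05)
print(m)  # -> 199
```

Note how the bound grows only logarithmically in the size of the hypothesis class but linearly in 1/ε: this "probably (δ) approximately (ε) correct" trade-off is exactly what the PAC framework formalizes.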
Syllabus
Intro
Supervised Machine Learning
Problem Parameters
Adversarial (Worst Case) Choices
Proper and Improper Learning
Agnostic Learning
The Theoretical Question
Why Probably (Approximately Correct)?
Learnability Example
Vapnik-Chervonenkis Dimension
VC Dimension and Proper Learnability
Significance and Influence of PAC
Criticisms of PAC
Taught by
Churchill CompSci Talks