Surprising Phenomena of Max-LP-Margin Classifiers in High Dimensions
Institute for Pure & Applied Mathematics (IPAM) via YouTube

Overview

Explore a 53-minute conference talk by Fanny Yang from ETH Zurich on "Surprising phenomena of max-lp-margin classifiers in high dimensions," presented at IPAM's Theory and Practice of Deep Learning Workshop. Delve into the analysis of max-lp-margin classifiers and their relevance to the implicit bias of first-order methods and to harmless interpolation in neural networks. Discover unexpected findings in the noiseless case: minimizing the l1-norm achieves optimal rates for regression with hard-sparse ground truths, yet this adaptivity does not carry over to max-l1-margin classification. Investigate how, with noisy observations, max-lp-margin classifiers can achieve 1/√n rates for p slightly larger than one, while max-l1-margin classifiers only achieve rates of order 1/√(log(d/n)). Gain insights into cutting-edge research in machine learning theory and its implications for deep learning practice.
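To make the talk's central object concrete, below is a minimal sketch of computing a max-lp-margin classifier, i.e., the unit-lp-norm direction maximizing the minimal margin min_i y_i⟨x_i, θ⟩, via the standard equivalent convex program min ‖θ‖_p subject to y_i⟨x_i, θ⟩ ≥ 1. The synthetic data, the dimensions, and the use of cvxpy are illustrative assumptions, not taken from the talk.

```python
import numpy as np
import cvxpy as cp

# Synthetic high-dimensional setup (illustrative assumptions, not the
# talk's exact experiments): d >> n, a hard-sparse ground truth, and
# noiseless labels.
rng = np.random.default_rng(0)
n, d = 40, 200
theta_star = np.zeros(d)
theta_star[:3] = 1.0                 # hard-sparse ground truth
X = rng.standard_normal((n, d))
y = np.sign(X @ theta_star)          # noiseless labels

def max_lp_margin(X, y, p):
    """Max-lp-margin direction via the equivalent convex program
    min ||theta||_p  s.t.  y_i * <x_i, theta> >= 1 for all i;
    rescaling the minimizer to unit lp-norm maximizes the lp-margin."""
    theta = cp.Variable(X.shape[1])
    problem = cp.Problem(cp.Minimize(cp.norm(theta, p)),
                         [cp.multiply(y, X @ theta) >= 1])
    problem.solve()
    return theta.value / np.linalg.norm(theta.value, p)

# Compare the achieved lp-margin for p = 1 and for p slightly larger
# than one, the regimes contrasted in the talk.
for p in (1.0, 1.1, 2.0):
    theta_hat = max_lp_margin(X, y, p)
    print(f"p = {p}: margin = {np.min(y * (X @ theta_hat)):.3f}")
```

The achieved margin equals 1/‖θ_opt‖_p, so varying p trades off how strongly the classifier is biased toward sparse directions, which is the tension the talk analyzes.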
Syllabus
Fanny Yang - Surprising phenomena of max-lp-margin classifiers in high dimensions - IPAM at UCLA
Taught by
Institute for Pure & Applied Mathematics (IPAM)