Explore sparsification techniques for generalized linear models in this 56-minute lecture by Yang Liu from Stanford University. Delve into the mathematical foundations of sparsifying sums F: R^n → R_+ of the form F(x) = f_1(⟨a_1, x⟩) + ... + f_m(⟨a_m, x⟩). Learn about the existence of (1+ε)-approximate sparsifiers with support size (n/ε^2)·(log(n/ε))^O(1) for symmetric, monotone functions f_i satisfying natural growth bounds. Discover efficient algorithms for computing such sparsifiers and their applications to optimizing various generalized linear models, including ℓ_p regression. Gain insights into near-optimal reductions for high-accuracy optimization via solving sparse regression instances. Understand the implications of this work, which generalizes classic ℓ_p sparsification and provides the first near-linear-size sparsifiers for the Huber loss function and its generalizations.
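To make the guarantee concrete, here is a sketch of the usual way a (1+ε)-approximate sparsifier of such a sum F is stated; the index set S and weights w_i are introduced here as assumed notation for the retained terms and their reweighting factors, not taken verbatim from the lecture:

```latex
% Sketch of the (1+\epsilon)-approximate sparsifier guarantee.
% S \subseteq \{1,\dots,m\} indexes the retained terms and w_i \ge 0 are
% their reweighting factors (assumed notation).
\[
  \tilde{F}(x) \;=\; \sum_{i \in S} w_i \, f_i\bigl(\langle a_i, x \rangle\bigr),
  \qquad
  (1-\epsilon)\, F(x) \;\le\; \tilde{F}(x) \;\le\; (1+\epsilon)\, F(x)
  \quad \text{for all } x \in \mathbb{R}^n,
\]
% with support size |S| \le (n/\epsilon^2)\,(\log(n/\epsilon))^{O(1)}.
```

Classic ℓ_p sparsification corresponds to the special case f_i(t) = |t|^p, so the statement above recovers that setting as one instance of the general guarantee.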