Learning Deep Matrix Factorizations Via Gradient Descent - Implicit Bias Towards Low Rank
Institute for Pure & Applied Mathematics (IPAM) via YouTube
Overview
Explore a 37-minute conference talk from the Tensor Methods and Emerging Applications to the Physical and Data Sciences 2021 workshop, focusing on learning deep matrix factorizations through gradient descent. Delve into the concept of implicit bias in deep learning scenarios where network parameters outnumber training examples. Examine the simplified setting of linear networks and deep matrix factorizations, investigating how gradient descent converges to low-rank matrices. Gain insights from rigorous theoretical results in matrix estimation, including an analysis of the dynamics of the effective rank of the iterates. Consider open problems and potential extensions to learning low-rank tensor decompositions, presented by Holger Rauhut from RWTH Aachen University at the Institute for Pure and Applied Mathematics, UCLA.
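To make the setting concrete, the following is a minimal, hypothetical sketch (not code from the talk) of gradient descent on a deep matrix factorization: a product of several square factors is fitted to a subset of observed entries of a low-rank matrix, and the effective rank of the product is tracked along the way. All specifics here (matrix size, depth, initialization scale, learning rate, the rank threshold) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, true_rank, depth = 20, 2, 3  # assumed problem sizes for illustration

# Low-rank ground-truth matrix and a random subset of observed entries.
target = rng.standard_normal((n, true_rank)) @ rng.standard_normal((true_rank, n))
mask = rng.random((n, n)) < 0.3

# Deep factorization W_depth @ ... @ W_1 with small initialization;
# small initialization is what is believed to drive the low-rank bias.
scale = 1e-2
Ws = [scale * rng.standard_normal((n, n)) for _ in range(depth)]

def product(factors):
    P = factors[0]
    for W in factors[1:]:
        P = W @ P
    return P

def effective_rank(M, tol=1e-2):
    # Number of singular values above a relative threshold (one simple proxy).
    s = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

lr = 1e-2
for step in range(20001):
    P = product(Ws)
    residual = mask * (P - target)  # gradient of 0.5*||mask*(P - target)||_F^2 w.r.t. P
    grads = []
    for i in range(depth):
        left = np.eye(n)            # product of factors applied after W_i
        for W in Ws[i + 1:]:
            left = W @ left
        right = np.eye(n)           # product of factors applied before W_i
        for W in Ws[:i]:
            right = W @ right
        grads.append(left.T @ residual @ right.T)
    Ws = [W - lr * G for W, G in zip(Ws, grads)]
    if step % 5000 == 0:
        print(f"step {step:6d}  loss {0.5 * np.sum(residual**2):10.4f}  "
              f"effective rank {effective_rank(P)}")
```

With a small enough initialization, the printed effective rank of the product typically remains far below the ambient dimension while the observed entries are fit, which is the phenomenon the talk analyzes rigorously.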
Syllabus
Holger Rauhut: "Learning Deep Matrix Factorizations Via Gradient Descent: Implicit Bias Towards Low Rank"
Taught by
Institute for Pure & Applied Mathematics (IPAM)