
Learning Deep Matrix Factorizations Via Gradient Descent - Implicit Bias Towards Low Rank

Institute for Pure & Applied Mathematics (IPAM) via YouTube

Overview

Explore a 37-minute conference talk from the Tensor Methods and Emerging Applications to the Physical and Data Sciences 2021 workshop, focusing on learning deep matrix factorizations through gradient descent. Delve into the concept of implicit bias in deep learning scenarios where the network parameters outnumber the training examples. Examine the simplified setting of linear networks and deep matrix factorizations, investigating how gradient descent converges to low-rank matrices. Gain insights from rigorous theoretical results in matrix estimation, including an analysis of the dynamics of the effective rank of the iterates. Consider open problems and potential extensions to learning low-rank tensor decompositions. The talk is presented by Holger Rauhut of RWTH Aachen University at the Institute for Pure and Applied Mathematics (IPAM), UCLA.
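
For intuition, the sketch below (not taken from the talk) trains a depth-3 matrix factorization W3 @ W2 @ W1 by plain gradient descent to fit a few observed entries of a low-rank matrix, and tracks how the effective rank of the product evolves along the iterates. The matrix-completion setup, the matrix sizes, the step size, the small initialization scale, and the entropy-based notion of effective rank are all illustrative assumptions rather than details from the talk; the phenomenon it illustrates is the one the talk makes rigorous, namely that from a small initialization the gradient descent trajectory is biased towards low-rank matrices.

    import numpy as np

    rng = np.random.default_rng(0)
    n, depth, rank_true = 20, 3, 2

    # Low-rank ground truth, normalized, with a random mask of observed entries.
    M = rng.standard_normal((n, rank_true)) @ rng.standard_normal((rank_true, n))
    M /= np.linalg.norm(M, 2)            # unit spectral norm
    mask = rng.random((n, n)) < 0.4      # observe roughly 40% of the entries

    # Small initialization is what drives the implicit bias towards low rank.
    Ws = [0.05 * rng.standard_normal((n, n)) for _ in range(depth)]

    def product(factors):
        """Return W_d @ ... @ W_1 for factors listed as [W_1, ..., W_d]."""
        P = factors[0]
        for W in factors[1:]:
            P = W @ P
        return P

    def effective_rank(W, tol=1e-8):
        """Entropy-based effective rank of the singular value distribution."""
        s = np.linalg.svd(W, compute_uv=False)
        p = s / s.sum()
        p = p[p > tol]
        return float(np.exp(-(p * np.log(p)).sum()))

    lr = 0.2
    for step in range(3001):
        P = product(Ws)
        R = mask * (P - M)               # residual on the observed entries only
        # Gradient of 0.5 * ||mask * (W_d ... W_1 - M)||_F^2 in each factor:
        # dL/dW_j = (W_d ... W_{j+1})^T R (W_{j-1} ... W_1)^T.
        grads = []
        for j in range(depth):
            left = product(Ws[j + 1:]) if j + 1 < depth else np.eye(n)
            right = product(Ws[:j]) if j > 0 else np.eye(n)
            grads.append(left.T @ R @ right.T)
        Ws = [W - lr * g for W, g in zip(Ws, grads)]
        if step % 500 == 0:
            print(f"step {step:5d}  loss {0.5 * np.sum(R**2):.3e}  "
                  f"effective rank {effective_rank(product(Ws)):.2f}")

Running a sketch like this, the printed effective rank typically falls towards the true rank well before the loss is fully minimized, which is the low-rank bias of the gradient descent trajectory discussed in the talk.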

Syllabus

Holger Rauhut: "Learning Deep Matrix Factorizations Via Gradient Descent: Implicit Bias Towards Low Rank"

Taught by

Institute for Pure & Applied Mathematics (IPAM)
