
Foundations of Deep Learning and AI: Instabilities, Limitations, and Potential

Society for Industrial and Applied Mathematics via YouTube

Overview

Explore the foundations of computational mathematics, Smale's 18th problem, and the potential limitations of AI in this one-hour virtual seminar talk by Anders Hansen from the University of Cambridge. Delve into the paradox that deep learning systems are universally unstable even though the Universal Approximation Theorem guarantees that stable neural networks exist. Examine the parallels between current AI optimism and early 20th-century mathematical optimism, and consider how AI might face similar limitations. Investigate the existence of neural networks that approximate classical scientific computing mappings yet cannot be computed accurately by any algorithm. Analyze the inherent instability in deep learning methodologies and its implications for classification problems. Address the issue of AI-generated hallucinations in medical imaging and their connection to instability. Gain insights into the mathematical setup of deep learning, universal instability theorems, and the challenges of training and computing neural networks.
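
The instability at the heart of the talk can be illustrated with a small toy experiment. The sketch below is not from the seminar: the data, network size, step size, and stopping rule are illustrative choices, and it assumes PyTorch is available. It trains a tiny classifier that fits its training data well, then nudges one correctly classified input along the loss gradient until the prediction flips, reporting how small the perturbation is.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy 2-class data (hypothetical): two well-separated Gaussian blobs in the plane.
x = torch.cat([0.3 * torch.randn(200, 2) - 1.0, 0.3 * torch.randn(200, 2) + 1.0])
y = torch.cat([torch.zeros(200, dtype=torch.long), torch.ones(200, dtype=torch.long)])

model = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Fit the network to the training data.
for _ in range(300):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

# Take one correctly classified input and push it along the sign of the loss
# gradient in small steps until the predicted class flips, then report the
# size of the perturbation relative to the data scale.
x0 = x[200:201].clone()      # a point from class 1
label = torch.tensor([1])
x_adv = x0.clone()
for _ in range(200):
    x_adv.requires_grad_(True)
    loss = loss_fn(model(x_adv), label)
    grad, = torch.autograd.grad(loss, x_adv)
    x_adv = (x_adv + 0.02 * grad.sign()).detach()
    if model(x_adv).argmax(dim=1).item() != 1:
        break

print("original prediction :", model(x0).argmax(dim=1).item())
print("perturbed prediction:", model(x_adv).argmax(dim=1).item())
print("perturbation norm   :", (x_adv - x0).norm().item())

A small perturbation norm on the final line, compared with the unit scale of the data, is the kind of behaviour the talk's universal instability results address.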

Syllabus

Intro
The impact of deep learning is unprecedented
How do we determine the foundations of DL?
Instabilities in classification/decision problems
AI techniques replace doctors
Transforming image reconstruction with AI
Comparison with state-of-the-art
Instability of DL in Inverse Problems - MRI
The press reports on instabilities
Hilbert's program on the foundations of mathematics
Program on the foundations of DL and AI
Should we expect instabilities in deep learning?
The instabilities in classification cannot be cured
The mathematical setup
Trained DL NNs yield small error on training data
Universal instability theorem
AI-generated hallucinations and instability
Gaussian perturbations and AUTOMAP
Sharpness of Theorem 3
Can neural networks be trained/computed?
Kernel awareness in compressed sensing
Kernel awareness is essential
Worst case perturbations for AUTOMAP
Conclusion
New book coming

Taught by

Society for Industrial and Applied Mathematics

