Explore the fascinating world of adversarial examples in deep learning through this engaging panel discussion. Delve into the intricacies of how attackers can design inputs to fool deep neural networks, causing them to make mistakes ranging from harmless misclassifications to potentially dangerous errors. Learn about the design of adversarial examples, strategies for guarding machine learning models against such attacks, and the relationship between model robustness and size. Join moderator Anil Ananthaswamy and expert panelists Sébastien Bubeck, Melanie Mitchell, and Laurens van der Maaten as they discuss theoretical and practical aspects of this critical topic in artificial intelligence and machine learning.
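To make the core idea concrete: a common way to "design inputs to fool" a model is to nudge each input feature slightly in the direction that hurts the model's score, as in the fast gradient sign method. The toy linear classifier below is purely illustrative (the weights, input, and epsilon are hypothetical, not from the panel), but it shows how a tiny, structured perturbation can flip a prediction.

```python
import numpy as np

# Illustrative sketch of the fast-gradient-sign idea on a toy linear model.
# All numbers here are made up for demonstration.
w = np.array([1.0, -1.0, 0.5])   # hypothetical model weights
x = np.array([0.6, 0.4, 0.2])    # clean input, classified positive (score > 0)

score = w @ x                    # clean score: 0.3 -> positive class

# For a linear model, the gradient of the score w.r.t. x is just w.
# Step each feature against the predicted class by eps * sign(gradient).
eps = 0.2
x_adv = x - eps * np.sign(w)     # small, bounded per-feature change

adv_score = w @ x_adv            # perturbed score: -0.2 -> prediction flips
```

Deep networks are far more complex than this linear toy, but the same principle applies: gradients reveal the directions in input space where small changes have an outsized effect on the output.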
Syllabus
Adversarial Examples in Deep Learning

Taught by
Simons Institute