Explore the fascinating world of adversarial examples in deep learning through this engaging panel discussion. Delve into the intricacies of how attackers can design inputs to fool deep neural networks, causing them to make mistakes ranging from harmless misclassifications to potentially dangerous errors. Learn about the design of adversarial examples, strategies for guarding machine learning models against such attacks, and the relationship between model robustness and size. Join moderator Anil Ananthaswamy and expert panelists Sébastien Bubeck, Melanie Mitchell, and Laurens van der Maaten as they discuss theoretical and practical aspects of this critical topic in artificial intelligence and machine learning.
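For context on the attack the panel describes, the sketch below shows one classic way an adversarial example can be constructed, the Fast Gradient Sign Method (FGSM): perturb each input pixel slightly in the direction that increases the model's loss. This is a minimal illustration only, not material from the discussion, and `model`, `image`, `label`, and `epsilon` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Illustrative FGSM sketch: nudge each pixel in the direction that
    increases the classifier's loss, bounded in magnitude by epsilon.
    `model`, `image`, and `label` are assumed placeholders."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step by the sign of the input gradient, then keep pixel values valid.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Even a small epsilon, imperceptible to a human viewer, can flip the model's prediction, which is why the panel's discussion of defenses and of the link between robustness and model size matters in practice.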