Overview
Explore the vulnerabilities of deep learning systems in this 38-minute conference talk from Hack.lu 2016. Dive into the world of machine duping as Clarence Chio, a Stanford-educated Security Research Engineer, demonstrates how to manipulate popular deep learning software without detection. Learn about the inherent shortcomings of neural networks in adversarial settings and witness real-world tampering with image recognition, speech recognition, and phishing detection systems. Gain insights into attack taxonomies, adversarial deep learning techniques, and black-box attack methodologies. Discover the implications of deep learning for privacy and explore potential defense mechanisms for these critical systems. This demo-driven session provides a comprehensive overview of deep learning security challenges, along with practical examples of exploiting vulnerabilities in AI-powered applications.
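To make the idea of adversarial examples concrete before the syllabus, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one well-known technique in the family of attacks the talk demonstrates. The toy logistic-regression "model" and every name below are illustrative assumptions for this listing, not code from the talk itself.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sign(g):
    # Returns -1, 0, or 1 depending on the sign of g.
    return (g > 0) - (g < 0)

def fgsm_perturb(x, w, b, y, eps):
    """Nudge input x in the direction that increases the cross-entropy
    loss of a toy logistic-regression classifier (weights w, bias b)."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)  # model confidence
    grad_x = [(p - y) * wi for wi in w]                    # d(loss)/dx
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad_x)]

# Toy example: the classifier is confident in the true label y = 1 ...
w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1.0

x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
p_clean = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
p_adv = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
# ... but a small, targeted perturbation sharply lowers that confidence.
print(round(p_clean, 3), round(p_adv, 3))  # prints: 0.818 0.5
```

The same sign-of-the-gradient step, applied per pixel to an image classifier, produces the barely visible perturbations shown in the talk's image recognition demos.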
Syllabus
Intro
Training Deep Neural Networks
Attack Taxonomy
Why can we do this?
Adversarial Deep Learning
Deep Neural Network Attacks
What can you do with limited knowledge?
Black box attack methodology
Defending the machines
Deep Learning and Privacy
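The "Black box attack methodology" section above covers attacking a model the adversary can only query. As a hedged, self-contained sketch of that setting (the threshold `oracle` below is a hypothetical stand-in for a deployed model, not anything from the talk), an attacker can search for a label flip using nothing but the model's output labels:

```python
import random

def oracle(x):
    # Hypothetical target model the attacker can query but not inspect:
    # labels an input 1 when its feature sum is large.
    return 1 if sum(x) > 1.0 else 0

def black_box_attack(x, step=0.2, max_queries=1000, seed=0):
    """Randomly perturb x until the oracle's label flips, counting only
    label queries; no weights or gradients are ever accessed."""
    rng = random.Random(seed)
    original = oracle(x)
    for q in range(1, max_queries + 1):
        candidate = [xi + rng.uniform(-step, step) for xi in x]
        if oracle(candidate) != original:
            return candidate, q  # adversarial input and query budget used
    return None, max_queries

adv, queries = black_box_attack([0.6, 0.5])
```

Real black-box methodologies are far more query-efficient (for example, training a local substitute model and transferring adversarial examples to the target), but the constraint is the same: the attacker sees only inputs and outputs.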
Taught by
Cooper