Overview
Explore a groundbreaking approach to certifying the safety and robustness of neural networks using abstract interpretation in this IEEE conference talk. Delve into AI2, the first sound and scalable analyzer for deep neural networks, capable of automatically proving safety properties of realistic convolutional neural networks. Learn how the speaker leverages decades of advances in abstract interpretation to reason about neural network safety and robustness. Discover the abstract transformers introduced to capture the behavior of fully connected and convolutional layers with ReLU activations and max pooling. Examine the implementation and evaluation of AI2 on 20 neural networks, which show that it is precise enough to prove useful specifications, can certify the effectiveness of state-of-the-art defenses, runs significantly faster than analyzers based on symbolic analysis, and scales to deep convolutional networks beyond the reach of existing methods.
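To give a flavor of what an abstract transformer is, here is a minimal sketch in the simpler interval (box) abstract domain; AI2 itself uses more precise zonotope-based domains, and the function names and toy network below are hypothetical illustrations, not taken from the talk. Each transformer soundly over-approximates a layer: it maps bounds on the layer's inputs to bounds that are guaranteed to contain every possible output.

```python
import numpy as np

def affine_interval(lo, hi, W, b):
    """Sound interval transformer for an affine layer y = W @ x + b.

    For each output, positive weights pair with the matching input bound
    and negative weights with the opposite one, so the result encloses
    all concrete outputs.
    """
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def relu_interval(lo, hi):
    """Sound interval transformer for ReLU: clamp both bounds at zero."""
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Hypothetical toy network: one hidden layer, two output logits.
W1 = np.array([[1.0, -1.0], [0.5, 0.5]])
b1 = np.zeros(2)
W2 = np.eye(2)
b2 = np.zeros(2)

# Abstract the L-infinity ball of radius 0.1 around an input point.
x = np.array([1.0, 0.0])
lo, hi = x - 0.1, x + 0.1

# Push the abstract region through the network layer by layer.
lo, hi = affine_interval(lo, hi, W1, b1)
lo, hi = relu_interval(lo, hi)
lo, hi = affine_interval(lo, hi, W2, b2)

# The robustness property "class 0 always wins" is certified if the
# worst case of logit 0 still beats the best case of logit 1.
print("certified" if lo[0] > hi[1] else "cannot certify")
```

Because every transformer over-approximates, a "certified" answer is a proof that no input in the region changes the classification; a "cannot certify" answer may reflect either a real violation or imprecision of the domain, which is why AI2's more precise domains matter in practice.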
Syllabus
Introduction
Demonstration
Other adversarial attacks
Robustness violations
Problem statement
Abstract Interpretation
Domain Types
Neural Networks
Fully Connected Layers
Max Pooling Layers
Results
Comparison
Future Improvements
Summary
Taught by
IEEE Symposium on Security and Privacy