Evaluating Neural Network Robustness - Targeted Attacks and Defenses
University of Central Florida via YouTube
Overview
Explore the robustness of neural networks in this 20-minute lecture from the University of Central Florida. Delve into targeted attack metrics, existing attacks such as Fast Gradient Sign (FGS) and the Jacobian-based Saliency Map Attack (JSMA), and new approaches to evaluating neural network vulnerability. Examine objective functions, box constraints, and methods for finding the best combination of attacks. Learn about attack evaluation techniques and their application to the ImageNet dataset. Conclude with an introduction to defensive distillation as a potential countermeasure against adversarial attacks.
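As a quick illustration of the kind of attack covered, here is a minimal PyTorch sketch of the one-step Fast Gradient Sign attack mentioned above; the function name, epsilon value, and choice of cross-entropy loss are illustrative assumptions, not drawn from the lecture itself.

```python
import torch
import torch.nn.functional as F

def fgs_attack(model, x, y, eps=0.03):
    """One-step Fast Gradient Sign sketch (untargeted form).

    Perturbs x in the direction of the sign of the loss gradient,
    then clips the result back to the valid pixel range [0, 1].
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # loss w.r.t. the true label y
    loss.backward()
    x_adv = x + eps * x.grad.sign()       # step along the gradient sign
    return x_adv.clamp(0.0, 1.0).detach() # enforce pixels in [0, 1]
```

For the targeted setting emphasized in the lecture, one would instead subtract eps * x.grad.sign() with y set to the desired target class, descending the loss toward that label rather than ascending it.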
Syllabus
Intro
Summary: Terminology (cont.)
Targeted Attack Metrics
Existing Attacks
Fast Gradient Sign (FGS)
Jacobian-based Saliency Map Attack (JSMA)
New approach
Objective Functions Explored
Dealing with Box Constraints: x + δ ∈ [0, 1] (sketched below the syllabus)
Finding Best Combination
Different Attacks (Cont.)
Attack Evaluation
Attacks on ImageNet
Defensive Distillation
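The "Dealing with Box Constraints: x + δ ∈ [0, 1]" item above refers to keeping every adversarial pixel inside the valid box. A standard way to handle this, used by the Carlini-Wagner attack that this lecture appears to follow, is the change of variables x + δ = (tanh(w) + 1)/2, so that w can be optimized freely with no explicit constraint. Below is a minimal sketch assuming PyTorch; the input shape, optimizer, learning rate, step count, and placeholder objective are illustrative assumptions.

```python
import torch

def to_box(w):
    # Change of variables: maps unconstrained w elementwise into [0, 1],
    # so x + delta = to_box(w) satisfies the box constraint by construction.
    return (torch.tanh(w) + 1.0) / 2.0

x = torch.rand(1, 3, 32, 32)  # example input already in [0, 1]
# Initialize w so that to_box(w) == x; the clamp avoids atanh(+/-1) = inf.
w = torch.atanh(2.0 * x.clamp(1e-6, 1.0 - 1e-6) - 1.0).requires_grad_(True)
opt = torch.optim.Adam([w], lr=0.01)

for _ in range(100):
    opt.zero_grad()
    x_adv = to_box(w)
    distortion = (x_adv - x).pow(2).sum()
    # In the full attack, an objective term f(x_adv) encouraging the
    # targeted misclassification would be added to the distortion here.
    loss = distortion
    loss.backward()
    opt.step()
```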
Taught by
UCF CRCV