Evaluating Neural Network Robustness - Targeted Attacks and Defenses

University of Central Florida via YouTube

Overview

Explore the robustness of neural networks in this 20-minute lecture from the University of Central Florida. Delve into targeted attack metrics, existing attacks such as the Fast Gradient Sign method and the Jacobian-based Saliency Map Attack, and a new approach to evaluating neural network vulnerability. Examine objective functions, the handling of box constraints, and methods for finding the best combination of attacks. Learn about attack evaluation techniques and their application to the ImageNet dataset. Conclude with an introduction to defensive distillation as a potential countermeasure against adversarial attacks.
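
For orientation, the sketch below shows a minimal version of the Fast Gradient Sign method mentioned above. It is an illustrative example only, assuming a PyTorch classifier and inputs scaled to [0, 1]; the names `model`, `x`, `y`, and the `eps` budget are placeholders, not anything from the lecture.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """Perturb x by eps in the direction of the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)   # loss w.r.t. the true labels
    loss.backward()
    # Untargeted step: increase the loss. A targeted attack would instead
    # compute the loss toward a chosen target class and subtract the sign.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()     # keep the result a valid image
```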

Syllabus

Intro
Summary: Terminology (cont.)
Targeted Attack Metrics
Existing Attacks
Fast Gradient Sign (FGS)
Jacobian-based Saliency Map Attack (JSMA)
New approach
Objective Functions Explored
Dealing with Box Constraints: x + δ ∈ [0, 1] (see the sketch after this syllabus)
Finding Best Combination
Different Attacks (Cont.)
Attack Evaluation
Attacks on ImageNet
Defensive Distillation
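
As a concrete illustration of the box-constraint item above, one common way to keep x + δ in [0, 1] is a change of variables through tanh, as used for example in the Carlini-Wagner attack; whether the lecture uses exactly this formulation is an assumption. The tensor names below are made up for the sketch.

```python
# Change-of-variables sketch for the box constraint x + delta in [0, 1]:
# optimize an unconstrained variable w; the perturbed image is valid by construction.
import torch

x = torch.rand(1, 3, 32, 32)   # stand-in for an original image in [0, 1]
w = torch.atanh(2 * x.clamp(1e-6, 1 - 1e-6) - 1).detach().requires_grad_(True)

adv = 0.5 * (torch.tanh(w) + 1)   # always lies in [0, 1]
delta = adv - x                   # the implied perturbation to measure or penalize
```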

Taught by

UCF CRCV
