

Lessons Learned from Evaluating the Robustness of Defenses to Adversarial Examples

USENIX via YouTube

Overview

Explore the challenges and lessons learned in evaluating defenses against adversarial examples in machine learning classifiers during this 48-minute USENIX Security '19 conference talk. Delve into common evaluation pitfalls, recommendations for thorough defense assessments, and comparisons between this emerging research field and established security evaluation practices. Gain insights from Research Scientist Nicholas Carlini of Google Research as he surveys the ways defenses have been broken and discusses the implications for future research. Learn about adversarial training, input transformations, and the importance of robust evaluation techniques in developing resilient machine learning models.

Syllabus

Introduction
Adversarial Examples
Why Care
What are Defenses
Adversarial Training
Thermometer Encoding
Input Transformation
Evaluating the robustness
Why are defenses easily broken
Lessons Learned
Adversarial Training
Empty Set
Evaluating Adversarially
Actionable Advice
Evaluation
Holding Out Data
FGSM
Gradient Descent
No Bounds
Random Classification
Negative Things
Evaluate Against the Worst Attack
Accuracy vs Distortion
Verification
Gradient Free
Random Noise
Conclusion
AES 1997
Attack success rates in insecurity
Why are we not yet like crypto
How much we can prove
Still a lot of work to do
L2 Distortion
We don't know what we want
We don't have that today
Summary
Questions
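The syllabus above mentions FGSM (the Fast Gradient Sign Method), one of the basic attacks the talk uses to evaluate defenses. As a rough illustration, here is a minimal sketch of FGSM applied to a toy logistic-regression classifier; the model, weights, and epsilon value are illustrative assumptions, not details from the talk itself.

```python
# Minimal FGSM sketch on a toy logistic-regression classifier.
# All weights and the epsilon budget below are made-up for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, epsilon):
    """Shift x by epsilon in the sign of the loss gradient,
    pushing the classifier's prediction away from the true label y."""
    p = sigmoid(w @ x + b)      # predicted probability of class 1
    grad_x = (p - y) * w        # d(cross-entropy loss)/dx for this model
    return x + epsilon * np.sign(grad_x)

# Toy example: a point the model correctly classifies as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, epsilon=0.9)
print(sigmoid(w @ x + b) > 0.5)      # original input: classified as class 1
print(sigmoid(w @ x_adv + b) > 0.5)  # perturbed input: prediction flips
```

The key point the talk stresses is that succeeding against FGSM alone says little: a thorough evaluation must also run stronger, adaptive attacks.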

Taught by

USENIX
