

A New Perspective on Adversarial Perturbations

Simons Institute via YouTube

Overview

Explore a new perspective on adversarial perturbations in deep learning in this 49-minute lecture by Aleksander Madry of MIT. Delve into the key problem of adversarial perturbations and examine machine learning through the lens of adversarial robustness. Compare the human and ML perspectives, analyze a simple experiment, and understand the robust features model. Investigate the differences between human and ML model priors, discover the new capability of robustification, and see how transferability follows as a natural consequence. Learn about the role of robust training, consider a new take on randomized smoothing, and explore the relationship between robustness and data efficiency. Examine a simple theoretical setting based on maximum likelihood Gaussian classification, and see how robustness connects to perceptual alignment and computer vision applications. Conclude with the lecture's central claim: adversarial examples arise from non-robust features in the data.
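
As a concrete point of reference (not code from the talk), here is a minimal sketch of the classic fast gradient sign method for crafting an adversarial perturbation, written in PyTorch. The model, labels, and epsilon budget are placeholder assumptions for illustration only.

    # Minimal FGSM sketch (illustrative only, not from the lecture).
    # `model` is assumed to be an arbitrary pretrained classifier;
    # epsilon is a placeholder L-infinity perturbation budget.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=8 / 255):
        """Return x plus an L-infinity-bounded adversarial perturbation."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)  # loss the attacker wants to increase
        loss.backward()
        # Step in the sign of the input gradient, then clamp back to
        # the valid image range.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

A perturbation crafted this way is typically imperceptible to a human yet flips the model's prediction, which is exactly the puzzle the lecture addresses.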

Syllabus

Why do we love deep learning?
Key Problem: Adversarial Perturbations
ML via Adversarial Robustness Lens
Human Perspective
ML Perspective
A Simple Experiment
The Robust Features Model
The Simple Experiment: A Second Look
Human vs ML Model Priors
New capability: Robustification
A Natural Consequence: Transferability
The Role of Robust Training
New Take on Randomized Smoothing
Robustness and Data Efficiency
A Simple Theoretical Setting: Max Likelihood Gaussian Classification
Robustness + Perception Alignment
Robustness + CV Applications
Adversarial examples arise from non-robust features in the data
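
The syllabus item on randomized smoothing refers to a defense that classifies by aggregating predictions over Gaussian-noised copies of the input. Below is a rough, hedged sketch of the prediction step only; the base model and noise level sigma are placeholder assumptions, and the certification math is omitted.

    # Rough sketch of randomized smoothing prediction (illustrative only).
    # The smoothed classifier returns the class the base classifier
    # predicts most often under Gaussian input noise.
    import torch

    def smoothed_predict(model, x, sigma=0.25, n_samples=100):
        """Majority-vote prediction of a base model under N(0, sigma^2) noise."""
        with torch.no_grad():
            # Broadcast n_samples noisy copies of the single input x.
            noisy = x.unsqueeze(0) + sigma * torch.randn(n_samples, *x.shape)
            votes = model(noisy).argmax(dim=1)  # per-sample predicted classes
        return votes.mode().values.item()  # most frequent class wins

This is the standard construction; the lecture offers its own take on why it works.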

Taught by

Simons Institute
