
Adversarial Examples and Human-ML Alignment

MITCBMM via YouTube

Overview

Explore the concept of adversarial examples and human-ML alignment in this lecture by Aleksander Madry of MIT. Delve into the comparison between deep networks and human vision, examining a natural perspective on adversarial examples. Investigate why adversarial perturbations are problematic from both the human and the machine learning viewpoint. Analyze the robust features model and its implications for interpretability, training modifications, and robustness tradeoffs. Discover how robustness relates to perception alignment and improved representations. Address the challenge of spurious correlations in data and learn about counterfactual analysis using robust models. Gain insight into the origin of adversarial examples as a consequence of non-robust features in datasets.
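To make the notion of an adversarial perturbation concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM) applied to a toy linear classifier. Everything in this example (the weights, inputs, and epsilon) is illustrative and not taken from the lecture; it only shows how a small, bounded change to the input can flip a model's decision.

```python
import numpy as np

# Toy linear "model": logit = w . x + b; predict class 1 if logit > 0.
# Weights and inputs are made up for illustration.
w = np.array([5.0, -6.0, 4.0])
b = 0.1

def logit(x):
    return float(w @ x + b)

def fgsm_perturb(x, eps):
    """Fast gradient sign method for a linear model.

    The gradient of the logit with respect to x is just w, so stepping
    each coordinate by eps against sign(w) maximally decreases the logit
    within an L-infinity ball of radius eps.
    """
    return x - eps * np.sign(w)

x = np.array([0.2, -0.1, 0.1])
print(logit(x))                      # positive logit: class 1
x_adv = fgsm_perturb(x, eps=0.2)
print(logit(x_adv))                  # negative logit: prediction flipped
```

The point the lecture develops is that such perturbations are not random noise: they exploit features the model genuinely relies on, even though those features look meaningless to a human.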

Syllabus

Adversarial Examples and Human-ML Alignment (Aleksander Madry)
Deep Networks: Towards Human Vision?
A Natural View on Adversarial Examples
Why Are Adv. Perturbations Bad?
Human Perspective
ML Perspective
The Robust Features Model
The Simple Experiment: A Second Look
Human vs ML Model Priors
In fact, models...
Consequence: Interpretability
Consequence: Training Modifications
Consequence: Robustness Tradeoffs
Robustness + Perception Alignment
Robustness + Better Representations
Problem: Correlations can be weird
"Counterfactual" Analysis with Robust Models
Adversarial examples arise from non-robust features in the data

Taught by

MITCBMM
