Class Central Classrooms: YouTube videos curated by Class Central.
Adversarial Examples and Human-ML Alignment

Classroom Contents
- 1 Adversarial Examples and Human-ML Alignment (Aleksander Madry)
- 2 Deep Networks: Towards Human Vision?
- 3 A Natural View on Adversarial Examples
- 4 Why Are Adv. Perturbations Bad?
- 5 Human Perspective
- 6 ML Perspective
- 7 The Robust Features Model
- 8 The Simple Experiment: A Second Look
- 9 Human vs ML Model Priors
- 10 In fact, models...
- 11 Consequence: Interpretability
- 12 Consequence: Training Modifications
- 13 Consequence: Robustness Tradeoffs
- 14 Robustness + Perception Alignment
- 15 Robustness + Better Representations
- 16 Problem: Correlations can be weird
- 17 "Counterfactual" Analysis with Robust Models
- 18 Adversarial examples arise from non-robust features in the data