Classroom Contents
Why Do Our Models Learn?
1. Intro
2. Machine Learning Can Be Unreliable
3. Indeed: Machine Learning is Brittle
4. Backdoor Attacks
5. Key problem: Our models are merely (excellent!) correlation extractors
6. Indeed: Correlations can be weird
7. Simple Setting: Background bias
8. Do Backgrounds Contain Signal?
9. ImageNet-9: A Fine-Grained Study (Xiao, Engstrom, Ilyas, Madry 2020)
10. Adversarial Backgrounds
11. Background-Robust Models?
12. How Are Datasets Created?
13. Dataset Creation in Practice
14. Consequence: Benchmark-Task Misalignment
15. Prerequisite: Detailed Annotations
16. Ineffective Data Filtering
17. Multiple Objects
18. Human-Label Disagreement
19. Human-Based Evaluation
20. Human vs. ML Model Priors
21. Consequence: Adversarial Examples (Ilyas, Santurkar, Tsipras, Engstrom, Tran, Madry 2019): standard models tend to lean on "non-robust" features, and adversarial perturbations manipulate exactly these features (see the sketch after this list)
22. Consequence: Interpretability
23. Consequence: Training Modifications
24. Robustness + Perception Alignment
25. Robustness + Better Representations
26. Counterfactual Analysis with Robust Models
27. ML Research Pipeline
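
Chapter 21 claims that adversarial perturbations work by manipulating the non-robust features standard models rely on. Below is a minimal sketch of one standard way to craft such a perturbation, the fast gradient sign method (FGSM); it assumes a PyTorch image classifier `model` and inputs `x` scaled to [0, 1], and illustrates the general idea rather than the specific construction used by Ilyas et al. 2019.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=8 / 255):
    """Fast gradient sign method: move each pixel a small step eps in
    the direction that increases the classification loss. The attack
    succeeds when this nudges the "non-robust" features the model
    leans on, even though the image looks unchanged to a human."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)    # loss w.r.t. the true labels y
    loss.backward()
    x_adv = x + eps * x.grad.sign()        # one signed-gradient step
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels valid in [0, 1]
```

A per-pixel budget this small (8/255) is typically imperceptible to a human, yet it is often enough to flip the prediction of a standard (non-robust) classifier.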