Why Do Our Models Learn?

MIT CBMM via YouTube
Currently playing: Consequence: Training Modifications (23 of 27)
Class Central Classrooms beta

YouTube videos curated by Class Central.

Classroom Contents

  1. Intro
  2. Machine Learning Can Be Unreliable
  3. Indeed: Machine Learning is Brittle
  4. Backdoor Attacks
  5. Key Problem: Our Models Are Merely (Excellent!) Correlation Extractors
  6. Indeed: Correlations Can Be Weird
  7. Simple Setting: Background Bias
  8. Do Backgrounds Contain Signal?
  9. ImageNet-9: A Fine-Grained Study (Xiao, Engstrom, Ilyas, Madry 2020)
  10. Adversarial Backgrounds
  11. Background-Robust Models?
  12. How Are Datasets Created?
  13. Dataset Creation in Practice
  14. Consequence: Benchmark-Task Misalignment
  15. Prerequisite: Detailed Annotations
  16. Ineffective Data Filtering
  17. Multiple Objects
  18. Human-Label Disagreement
  19. Human-Based Evaluation
  20. Human vs ML Model Priors
  21. Consequence: Adversarial Examples (Ilyas, Santurkar, Tsipras, Engstrom, Tran, Madry 2019): standard models tend to lean on "non-robust" features, and adversarial perturbations manipulate these features
  22. Consequence: Interpretability
  23. Consequence: Training Modifications
  24. Robustness + Perception Alignment
  25. Robustness + Better Representations
  26. Counterfactual Analysis with Robust Models
  27. ML Research Pipeline
