Robust Deep Learning Under Distribution Shift
Classroom Contents
- 1 Intro
- 2 Outline
- 3 Standard assumptions
- 4 Adversarial Misspellings (Char-Level Attack)
- 5 Curated Training Tasks Fail to Represent Reality
- 6 Feedback Loops
- 7 Impossibility absent assumptions
- 8 Detecting and correcting for label shift with black box predictors
- 9 Motivation 1: Pneumonia prediction
- 10 Epidemic
- 11 Motivation 2: Image Classification
- 12 The Test-Item Effect
- 13 Domain Adaptation - Formal Setup
- 14 Label Shift (aka Target Shift)
- 15 Contrast with Covariate Shift
- 16 Black Box Shift Estimation (BBSE) (see the sketch after this list)
- 17 Confusion matrices
- 18 Applying the label shift assumption...
- 19 Consistency
- 20 Error bound
- 21 Detection
- 22 Estimation error (MNIST)
- 23 Black Box Shift Correction (CIFAR-10 with IW-ERM)
- 24 A General Pipeline for Detecting Shift
- 25 Non-adversarial image perturbations
- 26 Detecting adversarial examples
- 27 Covariate shift + model misspecification
- 28 Implicit bias of SGD on linear networks with linearly separable data
- 29 Impact of IW on ERM decays over MLP training
- 30 Weight-Invariance after 1000 epochs
- 31 L2 Regularization vs. Dropout
- 32 Deep DA / Domain-Adversarial Nets
- 33 Synthetic experiments
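
Item 16 covers Black Box Shift Estimation (BBSE). As a rough illustration of the idea behind that chapter, here is a minimal NumPy sketch, assuming label shift and a trained black-box classifier: it builds the classifier's confusion matrix on held-out source data, measures the distribution of its predictions on unlabeled target data, and solves a linear system for the importance weights w(y) = q(y) / p(y). The function name and interface are hypothetical, not taken from the talk.

```python
import numpy as np

def bbse_weights(y_true_val, y_pred_val, y_pred_target, n_classes):
    """Estimate label-shift importance weights w(y) = q(y) / p(y)
    from a black-box classifier's predictions (BBSE-style sketch)."""
    # Joint confusion matrix on held-out source data:
    # C[i, j] ~= P(predicted label = i, true label = j)
    C = np.zeros((n_classes, n_classes))
    for yp, yt in zip(y_pred_val, y_true_val):
        C[yp, yt] += 1.0
    C /= len(y_true_val)

    # Distribution of predictions on unlabeled target data:
    # mu[i] ~= Q(predicted label = i)
    mu = np.bincount(y_pred_target, minlength=n_classes) / len(y_pred_target)

    # Under label shift, C @ w = mu; solve for w and clip to keep weights non-negative.
    w = np.linalg.solve(C, mu)
    return np.clip(w, 0.0, None)
```

In the pipeline the talk describes, these estimated weights would then feed into importance-weighted ERM (item 23) to reweight training examples by class when retraining the classifier on source data.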