Neural Nets for NLP 2017 - Adversarial Learning

Graham Neubig via YouTube

Classroom Contents

  1. Intro
  2. Generative Models
  3. Adversarial Training
  4. Basic Paradigm
  5. Problems with Generation • Over-emphasis of common outputs, fuzziness
  6. Adversarial Training Method
  7. In Equations (see the sketch after this list)
  8. Problems w/ Training
  9. Applications of GAN Objectives to Language
  10. Problem! Can't Backprop through Sampling
  11. Solution: Use Learning Methods for Latent Variables
  12. Discriminators for Sequences
  13. Stabilization Trick
  14. Interesting Application: GAN for Data Cleaning (Yang et al. 2017)
  15. Adversaries over Features vs. Over Outputs
  16. Learning Domain-invariant Representations (Ganin et al. 2016) • Learn features that cannot be distinguished by domain
  17. Adversarial Multi-task Learning (Liu et al. 2017)
  18. Implicit Discourse Connection Classification w/ Adversarial Objective
  19. Professor Forcing (Lamb et al. 2016)
  20. Unsupervised Style Transfer for Text (Shen et al. 2017)
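
For reference, the "Basic Paradigm" and "In Equations" segments presumably build up to the standard GAN minimax objective of Goodfellow et al. (2014). A minimal sketch in generic notation (the symbols G for the generator, D for the discriminator, p_data for the data distribution, and p_z for the noise prior are assumed here, not necessarily the lecture's own notation):

    \min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] \;+\; \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]

D is trained to assign high probability to real data and low probability to generated samples, while G is trained to fool D. For discrete outputs such as text, sampling from G breaks the gradient path from D back to G, which is the issue flagged in the "Problem! Can't Backprop through Sampling" segment and motivates the latent-variable learning methods listed above.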
