Neural Nets for NLP 2017 - Unsupervised Learning of Structure

Graham Neubig via YouTube

Classroom Contents

  1. Supervised, Unsupervised, Semi-supervised
  2. Learning Features vs. Learning Discrete Structure
  3. Unsupervised Feature Learning (Review)
  4. How do we Use Learned Features?
  5. What About Discrete Structure?
  6. A Simple First Attempt
  7. Unsupervised Hidden Markov Models: change labeled states to unlabeled numbers
  8. Hidden Markov Models w/ Gaussian Emissions: instead of parameterizing each state with a categorical distribution, we can use a Gaussian (or Gaussian mixture)
  9. Featurized Hidden Markov Models (Tran et al. 2016): calculate the transition and emission probabilities with neural networks; for emission, calculate a representation of each word w in the vocabulary
  10. CRF Autoencoders (Ammar et al. 2014)
  11. Soft vs. Hard Tree Structure
  12. One Other Paradigm: Weak Supervision
  13. Gated Convolution (Cho et al. 2014)
  14. Learning with RL (Yogatama et al. 2016)
  15. Phrase Structure vs. Dependency Structure
  16. Dependency Model w/ Valence (Klein and Manning 2004)
  17. Unsupervised Dependency Induction w/ Neural Nets (Jiang et al. 2016)
  18. Learning Dependency Heads w/ Attention (Kuncoro et al. 2017)
  19. Learning Segmentations w/ Reconstruction Loss (Elsner and Shain 2017)
  20. Learning Language-level Features (Malaviya et al. 2017): all previous work learned features of a single sentence
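The Gaussian-emission idea in item 8 is easy to sketch: each unlabeled hidden state emits real-valued observations from its own Gaussian instead of a categorical distribution over symbols, and the forward algorithm scores a sequence under the model. Below is a minimal NumPy sketch with hypothetical toy parameters (two states, 1-D emissions); the numbers are illustrative and are not taken from the lecture.

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    # Density of a 1-D Gaussian; vectorized over per-state means/variances.
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

def forward(obs, pi, trans, means, vars_):
    # Forward algorithm: likelihood of a 1-D observation sequence under an
    # HMM whose (unlabeled) states emit from state-specific Gaussians.
    alpha = pi * gaussian_pdf(obs[0], means, vars_)
    for x in obs[1:]:
        alpha = (alpha @ trans) * gaussian_pdf(x, means, vars_)
    return alpha.sum()

# Hypothetical toy parameters: two unlabeled states ("state 0", "state 1").
pi = np.array([0.6, 0.4])              # initial state probabilities
trans = np.array([[0.7, 0.3],          # transition probabilities
                  [0.2, 0.8]])
means = np.array([-1.0, 2.0])          # per-state emission means
vars_ = np.array([1.0, 0.5])           # per-state emission variances

obs = np.array([-0.9, -1.2, 1.8, 2.1])
print(forward(obs, pi, trans, means, vars_))
```

In the unsupervised setting, learning (e.g., with EM/Baum-Welch) would adjust the transition probabilities and the per-state means and variances to maximize this likelihood over unlabeled data.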
