Neural Nets for NLP 2017 - Unsupervised Learning of Structure

Graham Neubig via YouTube

Overview

Explore unsupervised learning of structure in natural language processing through this comprehensive lecture from CMU's Neural Networks for NLP course. Delve into the differences between learning features and learning structure, various unsupervised learning methods, and key design decisions for unsupervised models. Examine real-world examples of unsupervised learning, including hidden Markov models, CRF autoencoders, and dependency induction with neural networks. Gain insights into advanced topics such as learning with reinforcement learning, phrase structure vs. dependency structure, and learning language-level features. Access accompanying slides and related course materials to enhance your understanding of this complex subject in computational linguistics and machine learning.

Syllabus

Supervised, Unsupervised, Semi-supervised
Learning Features vs. Learning Discrete Structure
Unsupervised Feature Learning (Review)
How do we Use Learned Features?
What About Discrete Structure?
A Simple First Attempt
Unsupervised Hidden Markov Models • Change label states to unlabeled numbers
Hidden Markov Models w/ Gaussian Emissions • Instead of parameterizing each state with a categorical distribution, we can use a Gaussian (or Gaussian mixture)!
Featurized Hidden Markov Models (Tran et al. 2016) • Calculate the transition and emission probabilities with neural networks! • Emission: calculate a representation of each word w in the vocabulary
CRF Autoencoders (Ammar et al. 2014)
Soft vs. Hard Tree Structure
One Other Paradigm: Weak Supervision
Gated Convolution (Cho et al. 2014)
Learning with RL (Yogatama et al. 2016)
Phrase Structure vs. Dependency Structure
Dependency Model w/ Valence (Klein and Manning 2004)
Unsupervised Dependency Induction w/ Neural Nets (Jiang et al. 2016)
Learning Dependency Heads w/ Attention (Kuncoro et al. 2017)
Learning Segmentations w/ Reconstruction Loss (Elsner and Shain 2017)
Learning Language-level Features (Malaviya et al. 2017) • All previous work learned features of a single sentence
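Several syllabus items above build on the same core move: replace the HMM's categorical emission distribution with a continuous density over word vectors and train without labels. As a rough illustration of that idea (not code from the lecture — all names and parameter values here are invented for the sketch), this is a minimal log-space forward algorithm for an HMM with scalar Gaussian emissions:

```python
import numpy as np

def gaussian_logpdf(x, mean, var):
    """Log-density of a scalar observation x under N(mean, var),
    broadcast over per-state mean/var arrays."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def forward_loglik(obs, log_pi, log_A, means, vars_):
    """Log-likelihood of an observation sequence under a Gaussian-emission
    HMM, computed with the forward algorithm entirely in log space.

    log_pi : (K,)  log initial state probabilities
    log_A  : (K,K) log transition probabilities, log_A[i, j] = log p(j | i)
    means, vars_ : (K,) per-state Gaussian emission parameters
    """
    # Initialize with the first observation's emission log-densities.
    alpha = log_pi + gaussian_logpdf(obs[0], means, vars_)
    for x in obs[1:]:
        # Sum (logsumexp) over previous states, then add the emission term.
        alpha = (np.logaddexp.reduce(alpha[:, None] + log_A, axis=0)
                 + gaussian_logpdf(x, means, vars_))
    return np.logaddexp.reduce(alpha)

# Toy two-state model: one state emits around -2, the other around +2.
log_pi = np.log([0.5, 0.5])
log_A = np.log([[0.9, 0.1], [0.1, 0.9]])
means = np.array([-2.0, 2.0])
vars_ = np.array([1.0, 1.0])

# A sequence near +2 is far more probable than one far from both states.
ll_near = forward_loglik(np.array([2.0, 2.1, 1.9]), log_pi, log_A, means, vars_)
ll_far = forward_loglik(np.array([10.0, 10.0, 10.0]), log_pi, log_A, means, vars_)
```

In the unsupervised setting this likelihood is what gets maximized (e.g. with EM or gradient ascent), and the learned states then serve as induced tags; the featurized variant in the lecture replaces the fixed Gaussian and transition parameters with neural-network outputs.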

Taught by

Graham Neubig
