Neural Nets for NLP 2017 - Unsupervised Learning of Structure
Classroom Contents
- 1 Supervised, Unsupervised, Semi-supervised
- 2 Learning Features vs. Learning Discrete Structure
- 3 Unsupervised Feature Learning (Review)
- 4 How do we Use Learned Features?
- 5 What About Discrete Structure?
- 6 A Simple First Attempt
- 7 Unsupervised Hidden Markov Models • Change labeled states to unlabeled numbers
- 8 Hidden Markov Models w/ Gaussian Emissions • Instead of parameterizing each state with a categorical distribution, we can use a Gaussian (or Gaussian mixture)! (see the first sketch after this list)
- 9 Featurized Hidden Markov Models (Tran et al. 2016) • Calculate the transition and emission probabilities with neural networks! • Emission: calculate a representation of each word in the vocabulary w… (see the second sketch after this list)
- 10 CRF Autoencoders (Ammar et al. 2014)
- 11 Soft vs. Hard Tree Structure
- 12 One Other Paradigm: Weak Supervision
- 13 Gated Convolution (Cho et al. 2014)
- 14 Learning with RL (Yogatama et al. 2016)
- 15 Phrase Structure vs. Dependency Structure
- 16 Dependency Model w/ Valence (Klein and Manning 2004)
- 17 Unsupervised Dependency Induction w/ Neural Nets (Jiang et al. 2016)
- 18 Learning Dependency Heads w/ Attention (Kuncoro et al. 2017)
- 19 Learning Segmentations w/ Reconstruction Loss (Elsner and Shain 2017)
- 20 Learning Language-level Features (Malaviya et al. 2017) • All previous work learned features of a single sentence
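The Gaussian-emission HMM from item 8 can be tried directly with off-the-shelf tools. Below is a minimal sketch using hmmlearn's GaussianHMM, with random vectors standing in for pre-trained word embeddings; the embedding dimensionality, state count, and toy data are illustrative assumptions, not values from the lecture. Each hidden state emits a diagonal Gaussian over embedding space, and Viterbi decoding assigns every token an unlabeled state number, i.e. an induced tag.

```python
import numpy as np
from hmmlearn import hmm

# Toy stand-in for pre-trained word embeddings: one 50-dim vector per token.
# In the unsupervised-tagging setting, these would be real embeddings for
# the words of a corpus, concatenated sentence by sentence.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))   # 200 tokens, 50-dim embeddings (assumed sizes)
lengths = [25] * 8               # 8 sentences of 25 tokens each

# An HMM whose per-state emission distribution is a diagonal Gaussian over
# embedding space, instead of a categorical distribution over word types.
model = hmm.GaussianHMM(n_components=10, covariance_type="diag", n_iter=50)
model.fit(X, lengths)

# Viterbi decoding assigns each token an unlabeled state number (a "tag").
states = model.predict(X, lengths)
print(states[:25])
```

Swapping the categorical emission for a Gaussian is what lets the model exploit continuous word representations: tokens with nearby embeddings tend to land in the same induced state.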
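For item 9, one common way to featurize HMM emissions with a neural component, in the spirit of Tran et al. (2016), is a softmax over dot products between learned state representations and word representations. This is a hedged sketch under that assumption, not the paper's exact parameterization (which uses richer word features); all names and sizes here are illustrative.

```python
import numpy as np

def emission_probs(state_emb, word_emb):
    """Featurized emission matrix: p(word | state) as a softmax over dot
    products between state and word representations.
    state_emb: (K, d) one vector per hidden state
    word_emb:  (V, d) one vector per vocabulary word
    returns:   (K, V) each row is a normalized emission distribution
    """
    scores = state_emb @ word_emb.T              # (K, V) state-word compatibility
    scores -= scores.max(axis=1, keepdims=True)  # subtract row max for stability
    exp = np.exp(scores)
    return exp / exp.sum(axis=1, keepdims=True)

# Illustrative sizes: 5 states, 1000-word vocabulary, 64-dim representations.
rng = np.random.default_rng(0)
K, V, d = 5, 1000, 64
probs = emission_probs(rng.normal(size=(K, d)), rng.normal(size=(V, d)))
assert np.allclose(probs.sum(axis=1), 1.0)  # each state's emissions sum to 1
```

Because the emission table is now a function of representations rather than a free categorical parameter per state, the same machinery extends naturally to computing transition probabilities from state representations as well.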