CMU Neural Nets for NLP - Structured Prediction Basics

Graham Neubig via YouTube

Classroom Contents

  1. A Prediction Problem
  2. Types of Prediction
  3. Why Call it "Structured" Prediction?
  4. Many Varieties of Structured Prediction!
  5. Sequence Labeling w
  6. Why Model Interactions in Output? Consistency is important!
  7. A Tagger Considering Output Structure
  8. Training Structured Models
  9. Local Normalization and …
  10. The Structured Perceptron Algorithm: an extremely simple way of training (non-probabilistic) global models. Find the one-best, and if its score is better than the correct answer's, adjust parameters to … (see the first sketch after this list)
  11. Contrasting Perceptron and Global Normalization
  12. Structured Training and Pre-training
  13. Hinge Loss for Any Classifier! We can swap cross-entropy for hinge loss anytime (see the second sketch after this list)
  14. Cost-augmented Hinge
  15. Costs over Sequences
  16. Cost-Augmented Decoding for Hamming Loss (see the third sketch after this list)
  17. Solution 1: Sample Mistakes in Training (Ross et al. 2010)
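
As a concrete illustration of the update described in item 10, here is a minimal structured perceptron sketch for linear-chain sequence labeling. Everything in it is an assumption for illustration (the emission/transition parameterization, the toy data, and the names `viterbi` and `perceptron_train`); it is not code from the lecture.

```python
import numpy as np

def viterbi(emis, trans, n_tags):
    """Exact one-best decoding: emis[t, y] scores tag y at position t,
    trans[y_prev, y] scores each tag bigram."""
    T = len(emis)
    score = np.zeros((T, n_tags))
    back = np.zeros((T, n_tags), dtype=int)
    score[0] = emis[0]
    for t in range(1, T):
        for y in range(n_tags):
            cand = score[t - 1] + trans[:, y] + emis[t, y]
            back[t, y] = int(np.argmax(cand))
            score[t, y] = cand[back[t, y]]
    tags = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):
        tags.append(int(back[t, tags[-1]]))
    return tags[::-1]

def perceptron_train(data, n_words, n_tags, epochs=5):
    """Structured perceptron: decode the one-best sequence and, when it
    differs from the gold answer, move weights toward the gold features
    and away from the predicted ones."""
    emit = np.zeros((n_words, n_tags))
    tran = np.zeros((n_tags, n_tags))
    for _ in range(epochs):
        for words, gold in data:
            pred = viterbi(emit[words], tran, n_tags)
            if pred != gold:  # update only on mistakes
                for t, w in enumerate(words):
                    emit[w, gold[t]] += 1.0
                    emit[w, pred[t]] -= 1.0
                    if t > 0:
                        tran[gold[t - 1], gold[t]] += 1.0
                        tran[pred[t - 1], pred[t]] -= 1.0
    return emit, tran

# Toy corpus: word ids paired with gold tag ids (entirely made up).
data = [([0, 1, 2], [0, 1, 1]), ([2, 1, 0], [1, 1, 0])]
emit, tran = perceptron_train(data, n_words=3, n_tags=2)
print(viterbi(emit[[0, 1, 2]], tran, n_tags=2))  # [0, 1, 1] after training
```

The property item 10 names is visible in the inner loop: the model is non-probabilistic (no normalization anywhere), and parameters move only when the decoded one-best beats the correct answer.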
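Item 13's claim that cross-entropy can be swapped for hinge loss in any classifier is easy to see in code: both losses consume the same vector of class scores, so the model stays unchanged and only the loss (and hence the gradient) differs. A minimal numpy sketch with made-up scores:

```python
import numpy as np

def cross_entropy(scores, gold):
    # negative log-softmax probability of the gold class
    s = scores - scores.max()                # shift for numerical stability
    return np.log(np.exp(s).sum()) - s[gold]

def hinge(scores, gold, margin=1.0):
    # penalize the best wrong class if it comes within `margin` of gold
    wrong = np.delete(scores, gold)
    return max(0.0, margin + wrong.max() - scores[gold])

scores = np.array([2.0, 0.5, 1.8])           # toy class scores
print(cross_entropy(scores, 0))              # ~0.71
print(hinge(scores, 0))                      # max(0, 1 + 1.8 - 2.0) = 0.8
```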
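For items 14-16, a standard way to realize cost-augmented decoding under Hamming loss is to exploit the fact that Hamming cost decomposes over positions: fold a +1 penalty for every non-gold tag into the emission scores, then run ordinary one-best search. A sketch reusing the `viterbi` function from the perceptron example above (the function name is ours):

```python
import numpy as np

def cost_augmented_decode(emis, trans, gold, n_tags):
    """Hamming cost adds 1 per position tagged differently from gold;
    because it decomposes per position, it can be added to the emission
    scores and decoded with the unmodified Viterbi from above."""
    cost = np.ones_like(emis)                # +1 for every wrong tag ...
    cost[np.arange(len(gold)), gold] = 0.0   # ... and 0 for the gold tag
    return viterbi(emis + cost, trans, n_tags)
```

During training with a cost-augmented hinge, this output replaces the plain one-best: the model is pushed to score the gold sequence above outputs that are both high-scoring and high-cost.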
