Overview
Explore structured learning algorithms in this advanced Natural Language Processing lecture from Carnegie Mellon University. Delve into reinforcement learning, minimum risk training, and the structured perceptron. Examine structured max-margin objectives and simple remedies for exposure bias. Learn about globally normalized models, sampling and beam search techniques, and structured training approaches including the hinge loss, cost-augmented hinge loss, and contrastive learning. Gain insights into teacher forcing, self-training, and evaluation metrics for structured prediction tasks in NLP.
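To give a concrete flavor of one topic covered, below is a minimal sketch of the structured perceptron for sequence labeling: decode the current best tag sequence, then move the weights toward the gold features and away from the predicted ones. The feature set, tag inventory, Viterbi decoder, and toy data here are illustrative assumptions, not material taken from the lecture.

```python
# Minimal structured perceptron sketch (illustrative only; features, tags,
# and data are hypothetical stand-ins, not the lecture's code).
from collections import defaultdict

TAGS = ["DET", "NOUN", "VERB"]

def features(words, tags):
    """Count emission (word, tag) and transition (prev_tag, tag) features."""
    feats = defaultdict(float)
    prev = "<s>"
    for w, t in zip(words, tags):
        feats[("emit", w, t)] += 1.0
        feats[("trans", prev, t)] += 1.0
        prev = t
    return feats

def viterbi(words, weights):
    """Argmax over tag sequences under the current linear model."""
    score = {t: weights[("emit", words[0], t)] + weights[("trans", "<s>", t)] for t in TAGS}
    back = [{t: None for t in TAGS}]
    for i in range(1, len(words)):
        new_score, back_i = {}, {}
        for t in TAGS:
            best_val, best_prev = max((score[p] + weights[("trans", p, t)], p) for p in TAGS)
            new_score[t] = best_val + weights[("emit", words[i], t)]
            back_i[t] = best_prev
        score, back = new_score, back + [back_i]
    # Follow backpointers to recover the highest-scoring tag sequence.
    tags = [max(score, key=score.get)]
    for i in range(len(words) - 1, 0, -1):
        tags.insert(0, back[i][tags[0]])
    return tags

def train(data, epochs=5):
    """Perceptron update: reward gold features, penalize predicted features."""
    weights = defaultdict(float)
    for _ in range(epochs):
        for words, gold in data:
            pred = viterbi(words, weights)
            if pred != gold:
                for f, v in features(words, gold).items():
                    weights[f] += v
                for f, v in features(words, pred).items():
                    weights[f] -= v
    return weights

# Toy usage with hypothetical data.
data = [(["the", "dog", "barks"], ["DET", "NOUN", "VERB"]),
        (["a", "cat", "sleeps"], ["DET", "NOUN", "VERB"])]
w = train(data)
print(viterbi(["the", "cat", "barks"], w))
```

The cost-augmented hinge loss discussed in the lecture follows the same pattern, except that decoding adds a cost term favoring high-error sequences so the model is trained with a margin.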
Syllabus
Introduction
Types of prediction
Teacher forcing
Evaluation metrics
Structured prediction
Reminder
Globally normalized models
Sampling and Beam Search
Structured Perceptron
Global Structured Perceptron
Structured Training Pretraining
Hinge Loss
Cost Augmented Hinge Loss
Cost Over Sequences
Structured Hinge Loss
Label Smoothing vs Hinge Loss
Contrastive Learning
Teacher Forced
Self-training
Taught by
Graham Neubig