Classroom Contents
Neural Nets for NLP 2019 - Advanced Search Algorithms
- 1 Intro
- 2 Why search?
- 3 Basic Pruning Methods (Steinbiss et al. 1994)
- 4 Prediction-based Pruning Methods (e.g. Stern et al. 2017)
- 5 Backtracking-based Pruning Methods
- 6 What beam size should we use?
- 7 Variable-length output sequences: in many tasks (e.g. MT), the output sequences will be of variable length (see the length-normalized beam search sketch after this list)
- 8 More complicated normalization (Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, Wu et al. 2016)
- 9 Predict the output length (Eriguchi et al. 2016)
- 10 Why do Bigger Beams Hurt, pt. 2
- 11 Dealing with disparity in actions (Effective Inference for Generative Neural Parsing, Stern et al. 2017)
- 12 Solution
- 13 Improving Diversity in top N Choices
- 14 Improving Diversity through Sampling
- 15 Sampling without Replacement (cont'd; see the Gumbel-top-k sketch after this list)
- 16 Monte-Carlo Tree Search (Human-like Natural Language Generation Using Monte Carlo Tree Search)
- 17 More beam search in training (A Continuous Relaxation of Beam Search for End-to-end Training of Neural Sequence Models, Goyal et al. 2017)
- 18 Adaptation to neural networks: CCG Parsing
- 19 Is the heuristic admissible? (Lee et al. 2016)
- 20 Estimating future costs (Li et al. 2017)
- 21 Actor Critic (Bahdanau et al. 2017)
- 22 Actor Critic (continued)
- 23 A* search: benefits and drawbacks
- 24 Particle Filters (Buys et al. 2015)
- 25 Reranking (Dyer et al. 2016)
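
Items 7-8 above concern scoring variable-length outputs with a length-normalized beam search. Below is a minimal sketch of that idea in Python, assuming a hypothetical `next_log_probs(prefix)` callback that returns a dict of next-token log-probabilities from some trained model; `bos`/`eos` stand in for the model's start and end tokens, and `alpha` is the GNMT-style length-penalty exponent. This is an illustration of the technique, not code from the lecture.

```python
import heapq

def beam_search(next_log_probs, bos, eos, beam_size=5, max_len=20, alpha=0.6):
    """Beam search with GNMT-style length normalization.

    next_log_probs(prefix) -> {token: log_prob} is a hypothetical stand-in
    for a trained model's next-token distribution.
    """
    beams = [(0.0, [bos])]  # (cumulative log-prob, token sequence)
    finished = []
    for _ in range(max_len):
        candidates = []
        for score, seq in beams:
            for tok, lp in next_log_probs(seq).items():
                hyp = (score + lp, seq + [tok])
                # completed hypotheses leave the beam; the rest compete for it
                (finished if tok == eos else candidates).append(hyp)
        if not candidates:
            break
        # keep the top-k partial hypotheses by raw cumulative log-prob
        beams = heapq.nlargest(beam_size, candidates, key=lambda c: c[0])
    # normalize by length**alpha so shorter outputs are not unfairly favored
    return max(finished, key=lambda c: c[0] / len(c[1]) ** alpha, default=None)
```

Without the final normalization, the cumulative log-probability strictly decreases with every added token, so plain beam search systematically prefers shorter outputs; dividing by `len ** alpha` (GNMT uses alpha around 0.6) counteracts that bias.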
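Items 14-15 cover improving diversity by sampling without replacement. A common way to implement this is the Gumbel-top-k trick (e.g. Kool et al. 2019's stochastic beam search): perturb each log-probability with independent Gumbel(0, 1) noise and keep the k highest perturbed scores, which yields k distinct items sampled from the underlying categorical distribution. The sketch below assumes a plain `{item: log_prob}` dict; the names are illustrative, not from the lecture.

```python
import math
import random

def gumbel():
    # Gumbel(0, 1) noise; random.random() is in [0, 1), so nudge u away from 0
    u = max(random.random(), 1e-12)
    return -math.log(-math.log(u))

def sample_without_replacement(log_probs, k):
    """Draw k distinct items via the Gumbel-top-k trick."""
    perturbed = {item: lp + gumbel() for item, lp in log_probs.items()}
    return sorted(perturbed, key=perturbed.get, reverse=True)[:k]

# Toy usage: draw two distinct tokens from a three-way distribution.
dist = {"a": math.log(0.7), "b": math.log(0.2), "c": math.log(0.1)}
print(sample_without_replacement(dist, k=2))
```

Unlike repeated independent sampling, this never returns duplicates, so all k slots of an N-best list are spent on distinct hypotheses.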