Neural Nets for NLP - Multi-task, Multi-lingual Learning

Classroom Contents
- 1 Intro
- 2 Remember, Neural Nets are Feature Extractors!
- 3 Types of Learning
- 4 Plethora of Tasks in NLP
- 5 Rule of Thumb 1: Multitask to Increase Data
- 6 Rule of Thumb 2
- 7 Standard Multi-task Learning (see the first sketch after this list)
- 8 Examples of Pre-training Encoders: it is common to pre-train encoders for downstream tasks
- 9 Regularization for Pre-training (e.g. Barone et al. 2017): pre-training relies on the fact that we won't move too far from the pre-trained parameters (see the second sketch after this list)
- 10 Selective Parameter Adaptation: sometimes it is better to adapt only some of the parameters (see the third sketch after this list)
- 11 Soft Parameter Tying
- 12 Supervised Domain Adaptation through Feature Augmentation
- 13 Unsupervised Learning through Feature Matching
- 14 Multilingual Structured Prediction / Multilingual Outputs: things are harder when predicting a sequence of actions (parsing) or words (MT) in different languages
- 15 Multi-lingual Sequence-to-sequence Models
- 16 Types of Multi-tasking
- 17 Multiple Annotation Standards
- 18 Different Layers for Different Tasks
- 19 Summary of design dimensions
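
The "Standard Multi-task Learning" item above covers training one shared feature extractor with a separate output head per task. Below is a minimal PyTorch sketch of that setup; the model sizes, the task names ("sentiment", "topic"), and the alternating-batch training step are illustrative assumptions, not details from the lecture.

```python
import torch
import torch.nn as nn

# Shared encoder: the "feature extractor" reused across all tasks
class SharedEncoder(nn.Module):
    def __init__(self, vocab_size=10000, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, tokens):
        states, _ = self.lstm(self.embed(tokens))
        return states.mean(dim=1)  # crude pooled sentence representation

encoder = SharedEncoder()
# One small task-specific head per task (task names/sizes are assumed)
heads = nn.ModuleDict({
    "sentiment": nn.Linear(256, 2),
    "topic": nn.Linear(256, 20),
})
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(heads.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(task, tokens, labels):
    """One multi-task step: callers alternate `task` between batches."""
    opt.zero_grad()
    loss = loss_fn(heads[task](encoder(tokens)), labels)
    loss.backward()
    opt.step()  # updates the shared encoder plus this task's head
    return loss.item()
```

Because both heads backpropagate into the same encoder, data from either task improves the shared representation, which is the point of "multitask to increase data".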
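
The "Regularization for Pre-training" item notes that fine-tuning relies on staying close to the pre-trained parameters. Here is a minimal sketch in that spirit, using an L2 penalty toward a snapshot of the pre-trained weights (one of the regularizers studied by Barone et al. 2017); the toy model, dummy batch, and penalty strength `lam` are assumptions.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)   # toy stand-in for a pre-trained model
# Snapshot the pre-trained weights once, before fine-tuning begins
pretrained = [p.detach().clone() for p in model.parameters()]
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
lam = 0.01                 # penalty strength (assumed; needs tuning)

x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))  # dummy batch

opt.zero_grad()
task_loss = loss_fn(model(x), y)
# L2 penalty: grows as parameters drift from their pre-trained values
drift = sum(((p - p0) ** 2).sum()
            for p, p0 in zip(model.parameters(), pretrained))
(task_loss + lam * drift).backward()
opt.step()
```

Soft parameter tying (item 11) can be sketched the same way, except the penalty acts between the parameters of two jointly trained models instead of toward a fixed snapshot.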
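
The "Selective Parameter Adaptation" item suggests updating only a subset of the parameters. A common way to do this in PyTorch is to disable gradients on the frozen subset; which layers to freeze is task-dependent, and freezing everything but the final layer below is just an assumed choice.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

# Freeze all parameters, then re-enable only the part we want to adapt
for p in model.parameters():
    p.requires_grad = False
for p in model[-1].parameters():  # here: just the final layer
    p.requires_grad = True

# Give the optimizer only the still-trainable parameters
opt = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)
```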