

Learning Neural Network Hyperparameters for Machine Translation - 2019

Center for Language & Speech Processing (CLSP), JHU via YouTube

Overview

Explore neural network hyperparameter optimization for machine translation in this 52-minute conference talk by Kenton Murray, a PhD candidate at the University of Notre Dame. Dive into methods for improving hyperparameter selection without expensive grid searches, focusing on techniques that learn hyperparameters during training itself. Examine common regularization techniques, objective functions, and proximal gradient methods. Analyze experiments on 5-gram language modeling and on auto-sizing transformer layers. Discover key takeaways about the non-universality of optimal hyperparameters and the potential of perceptron tuning for beam search. Gain insights into implementing these techniques in PyTorch and applying them to low-resource and morphologically rich language pairs.
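To make the proximal gradient idea concrete, here is a minimal PyTorch sketch, not code from the talk: after each ordinary SGD step on the unregularized loss, a proximal (soft-thresholding) step handles the sparsity penalty, driving weights exactly to zero so that dead units can be pruned, which is the mechanism behind auto-sizing a layer. The talk's auto-sizing work uses group regularizers such as the l2,1 norm; this sketch uses a plain l1 penalty for brevity, and the layer sizes, learning rate, and regularization strength are arbitrary placeholders.

```python
# Illustrative sketch of a proximal gradient step in PyTorch.
# Assumptions (not from the talk): plain l1 penalty, toy regression data.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(10, 10)   # stand-in layer; dimensions are arbitrary
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
lam = 0.01                  # regularization strength (the hyperparameter)

x = torch.randn(32, 10)
target = torch.randn(32, 10)
loss_fn = nn.MSELoss()

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), target)
    loss.backward()
    optimizer.step()        # gradient step on the unregularized loss only

    # Proximal step for the l1 penalty: soft-threshold each weight toward
    # zero by lr * lam, producing exact zeros rather than small values.
    with torch.no_grad():
        lr = optimizer.param_groups[0]["lr"]
        w = model.weight
        w.copy_(torch.sign(w) * torch.clamp(w.abs() - lr * lam, min=0.0))

# Rows of exactly-zero weights could be pruned, shrinking the layer
# during training -- the essence of auto-sizing.
print("zero weights:", int((model.weight == 0).sum()))
```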

Syllabus

Intro
Statistical Machine Translation
Motivation
Grid Search
Method Overview
Common Regularization
Objective Function
Proximal Gradient Methods
Experiments: 5-gram Language Modeling
5-gram Perplexity
Behavior During Training
Key Takeaways
Optimal Hyperparameters Not Universal
Auto-Sizing Transformer Layers
PyTorch Implementation
Beam Search
Perceptron Tuning
Experiment: Tuned Reward
Questions?

Taught by

Center for Language & Speech Processing (CLSP), JHU
