From Paper to Product - How We Implemented BERT
MLCon | Machine Learning Conference via YouTube
Overview
A conference talk on taking BERT from research paper to production: why an earlier pipeline built on part-of-speech tagging with Stanford NLP was replaced, what supervised, unsupervised, and semi-supervised training mean in this context, how BERT's input structure and transformer blocks work, and how parameter tuning, WordPiece preprocessing for German and English text, and training played out in practice, closing with training results and future plans.
Syllabus
Introduction
About the talk
How did it start
What happens while the customer waits
Good old AI
Visionary
English
German
Product
Customer Feedback
Part of Speech Tagging
Architecture
Stanford NLP
Reinventing the wheel
What is BERT
What is supervised training
Unsupervised learning
Unsupervised training
Semi-supervised training
Input structure
Transformer block
What is good
What is bad
How did we do it
BERT parameter tuning
Preprocessing
Word pieces
Morphemes vs morphs
Dictionary size
How does it look
What did we save
Training
Training Results
Future Plans
Backend
ZukaText
What's next
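The syllabus covers part-of-speech tagging and Stanford NLP as the pre-BERT approach. Purely as an illustration (not the speaker's code), here is a minimal POS-tagging sketch using Stanza, the Stanford NLP group's Python library; the German example sentence is made up.

```python
import stanza

# Download the German models once, then build a tokenize + POS pipeline.
stanza.download("de")
nlp = stanza.Pipeline(lang="de", processors="tokenize,pos")

doc = nlp("Der Kunde wartet auf eine Antwort.")
for sentence in doc.sentences:
    for word in sentence.words:
        # Print each token with its universal POS tag (e.g. DET, NOUN, VERB).
        print(word.text, word.upos)
```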
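For the "BERT parameter tuning" and "Training" chapters, a hedged sketch of one typical BERT fine-tuning step with Hugging Face Transformers; the model name, label count, learning rate, and example input are assumptions in the range the original BERT paper recommends, not details taken from the talk.

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

# Assumed model and label count; the talk does not specify either.
MODEL_NAME = "bert-base-german-cased"
tokenizer = BertTokenizer.from_pretrained(MODEL_NAME)
model = BertForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)

# Learning rate in the 2e-5 to 5e-5 range suggested by the BERT paper.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

batch = tokenizer(
    ["Beispieltext für das Training."],  # placeholder input
    padding=True,
    truncation=True,
    return_tensors="pt",
)
labels = torch.tensor([0])  # placeholder label

model.train()
optimizer.zero_grad()
outputs = model(**batch, labels=labels)  # forward pass returns the loss
outputs.loss.backward()
optimizer.step()
```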
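For "Word pieces" and "Dictionary size": WordPiece keeps the vocabulary small by splitting rare words into sub-word units, which matters for long German compounds. A small illustration with a pretrained German tokenizer (the model name and example word are assumptions, not taken from the talk).

```python
from transformers import BertTokenizer

# A pretrained German BERT vocabulary of roughly 30k word pieces.
tokenizer = BertTokenizer.from_pretrained("bert-base-german-cased")
print(tokenizer.vocab_size)

# A long compound is unlikely to be a single vocabulary entry;
# WordPiece splits it into known sub-word units marked with '##'.
print(tokenizer.tokenize("Donaudampfschifffahrtsgesellschaft"))
```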
Taught by
MLCon | Machine Learning Conference