Overview
Learn about language modeling fundamentals in this lecture covering key concepts from padding to neural language models. Begin with a review of assignments before diving into padding and the limitations of static embeddings, random initialization, and bag-of-words approaches. Trace the timeline from transformers to RLHF (Reinforcement Learning from Human Feedback), followed by an in-depth examination of n-gram language models. Conclude with an exploration of neural language models and their applications in modern natural language processing.
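To make the padding topic concrete, here is a minimal sketch of right-padding variable-length token sequences to a common length so they can be stacked into one batch; the pad_batch helper, the PAD_ID value, and the toy batch are illustrative assumptions, not taken from the lecture.

```python
# Minimal padding sketch: bring variable-length token-ID sequences
# to the same length so they can form a rectangular batch.
# PAD_ID and pad_batch are illustrative names, not from the lecture.
PAD_ID = 0

def pad_batch(sequences, pad_id=PAD_ID):
    """Right-pad each token-ID sequence to the length of the longest one."""
    max_len = max(len(seq) for seq in sequences)
    return [seq + [pad_id] * (max_len - len(seq)) for seq in sequences]

batch = [[5, 12, 7], [9, 3], [4, 8, 2, 6]]
print(pad_batch(batch))
# [[5, 12, 7, 0], [9, 3, 0, 0], [4, 8, 2, 6, 0]]
```

In practice a model also needs a mask marking the pad positions, so that attention and the loss ignore them.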
Syllabus
Recap / Assignments
Padding
Limitations of static embeddings, random init & BoW
Transformers to RLHF timeline
N-gram LMs (see the sketch after this syllabus)
Neural LMs
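As a taste of the n-gram topic above, here is a minimal sketch of a bigram language model with maximum-likelihood estimates; the toy corpus, the <s>/</s> boundary markers, and the bigram_prob helper are illustrative choices, not taken from the lecture.

```python
from collections import Counter

# Bigram LM sketch: P(w_i | w_{i-1}) estimated by
# count(w_{i-1}, w_i) / count(w_{i-1}).
corpus = [["the", "cat", "sat"], ["the", "cat", "ran"], ["a", "dog", "sat"]]

unigrams, bigrams = Counter(), Counter()
for sent in corpus:
    tokens = ["<s>"] + sent + ["</s>"]   # sentence boundary markers
    unigrams.update(tokens[:-1])          # histories
    bigrams.update(zip(tokens[:-1], tokens[1:]))

def bigram_prob(prev, word):
    """Maximum-likelihood estimate; returns 0.0 for unseen histories."""
    return bigrams[(prev, word)] / unigrams[prev] if unigrams[prev] else 0.0

print(bigram_prob("the", "cat"))  # 2/2 = 1.0
print(bigram_prob("cat", "sat"))  # 1/2 = 0.5
```

Real n-gram models add smoothing (e.g., add-one or Kneser-Ney) so unseen bigrams do not receive zero probability.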
Taught by
UofU Data Science