Overview
Learn about vector semantics and word embeddings in this comprehensive lecture covering fundamental concepts in natural language processing. Explore representation learning techniques and examine key aspects of word meaning. Understand the distributional hypothesis and why it underpins modern NLP. Master the principles of vector semantics and the influential word2vec model for learning word embeddings. Conclude by examining practical applications, including visualizing embeddings, solving word analogies, and recognizing potential biases in embedding models.
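To make the word2vec idea concrete, here is a minimal sketch using the gensim library (an assumption; the lecture may use different tooling or implement the model from scratch). It trains skip-gram embeddings on a toy corpus and queries them for similarity, illustrating the distributional hypothesis: words that occur in similar contexts receive similar vectors.

```python
# A minimal word2vec sketch, assuming the gensim library (not confirmed
# by the lecture). Corpus and hyperparameters are illustrative only.
from gensim.models import Word2Vec

# Tiny toy corpus of pre-tokenized sentences.
sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
    ["a", "cat", "chased", "a", "dog"],
]

# sg=1 selects the skip-gram architecture; vector_size is the embedding
# dimensionality; window is the context size around each target word.
model = Word2Vec(sentences, vector_size=50, window=2,
                 min_count=1, sg=1, epochs=50)

# Words appearing in similar contexts end up with similar vectors
# (the distributional hypothesis in action).
print(model.wv.similarity("cat", "dog"))
print(model.wv.most_similar("cat", topn=3))
```

On a corpus this small the similarities are essentially noise; meaningful neighbors emerge only with much larger training text.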
Syllabus
Lecture starts
Representation learning
Aspects of word meaning
Distributional hypothesis
Vector semantics
word2vec
Visualization, analogies, bias (see the analogy sketch after this syllabus)
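The analogies topic above is usually demonstrated with vector arithmetic. Below is a minimal numpy sketch of that idea; the 2-d vectors are invented for illustration (real embeddings have hundreds of dimensions) and the word list is hypothetical.

```python
import numpy as np

# Toy 2-d embeddings; axes loosely encode (royalty, gender).
# Values are hand-picked so the classic analogy works exactly.
emb = {
    "king":  np.array([0.9,  0.7]),
    "queen": np.array([0.9, -0.7]),
    "man":   np.array([0.1,  0.7]),
    "woman": np.array([0.1, -0.7]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Solve "man is to king as woman is to ?" by vector arithmetic.
target = emb["king"] - emb["man"] + emb["woman"]

# Rank the remaining vocabulary by cosine similarity to the target,
# excluding the three query words, as is standard for analogy evaluation.
candidates = {w: v for w, v in emb.items()
              if w not in {"king", "man", "woman"}}
print(max(candidates, key=lambda w: cosine(candidates[w], target)))  # queen
```

The same arithmetic is what surfaces bias in trained embeddings: analogies completed from real corpora can reproduce stereotyped associations, which is why the lecture pairs analogies with a discussion of bias.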
Taught by
UofU Data Science