Deep Learning for Natural Language Processing
Overview
Explore the foundations and advanced concepts of deep learning for Natural Language Processing (NLP) in this comprehensive lecture. Delve into various architectures used in NLP applications, including CNNs, RNNs, and the state-of-the-art transformer model. Understand the modules that make transformers advantageous for NLP tasks and learn effective training techniques. Discover beam search as a middle ground between greedy decoding and exhaustive search, and explore top-k sampling for text generation. Examine sequence-to-sequence models, back-translation, and unsupervised approaches to learning embeddings, including word2vec, GPT, and BERT. Gain insights into pre-training techniques for NLP and future directions in the field.
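As a rough illustration of the decoding strategies the lecture contrasts, here is a minimal PyTorch sketch of greedy decoding versus top-k sampling for a single next-token choice. The tiny vocabulary and logit values are made-up placeholders, not material from the lecture.

```python
import torch

# Made-up next-token logits over a tiny vocabulary (placeholder values).
vocab = ["the", "cat", "sat", "on", "mat"]
logits = torch.tensor([2.0, 1.5, 0.3, -1.0, 0.8])

# Greedy decoding: always pick the single most likely token.
greedy_id = torch.argmax(logits).item()
print("greedy:", vocab[greedy_id])

# Top-k sampling: keep only the k most likely tokens, renormalise
# their probabilities, and sample from that truncated distribution.
k = 3
top_logits, top_ids = torch.topk(logits, k)
probs = torch.softmax(top_logits, dim=-1)
sampled_id = top_ids[torch.multinomial(probs, num_samples=1)].item()
print("top-k sample:", vocab[sampled_id])
```

Greedy decoding is deterministic and can produce repetitive text; top-k sampling keeps generation diverse while never drawing from the unreliable low-probability tail.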
Syllabus
– Week 12 – Lecture
– Introduction to deep learning in NLP and language models
– Transformer language model structure and intuition
– Some tricks and practical details of transformer language models, and decoding from language models
– Beam Search, Sampling and Text Generation (see the beam-search sketch after this syllabus)
– Back-translation, word2vec and BERT
– Pre-training for NLP and Next Steps
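Since beam search appears in both the overview and the syllabus, a minimal self-contained sketch follows. The toy next-token log-probability table, the beam width, and the maximum length are illustrative assumptions, not material from the lecture.

```python
import math

# Toy conditional distribution: log-probabilities of the next token
# given the last token of the prefix (illustrative values only).
def next_token_logprobs(prefix):
    table = {
        "<s>": {"the": math.log(0.6), "a": math.log(0.4)},
        "the": {"cat": math.log(0.5), "dog": math.log(0.3), "</s>": math.log(0.2)},
        "a":   {"cat": math.log(0.4), "dog": math.log(0.4), "</s>": math.log(0.2)},
    }
    default = {"cat": math.log(1 / 3), "dog": math.log(1 / 3), "</s>": math.log(1 / 3)}
    return table.get(prefix[-1], default)

def beam_search(beam_width=2, max_len=5):
    # Each hypothesis is (tokens, cumulative log-probability).
    beams = [(["<s>"], 0.0)]
    for _ in range(max_len):
        candidates = []
        for tokens, score in beams:
            if tokens[-1] == "</s>":          # finished hypotheses carry over
                candidates.append((tokens, score))
                continue
            for tok, lp in next_token_logprobs(tokens).items():
                candidates.append((tokens + [tok], score + lp))
        # Keep only the beam_width highest-scoring hypotheses.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams

for tokens, score in beam_search():
    print(" ".join(tokens), f"(log-prob {score:.2f})")
```

With a beam width of 1 this reduces to greedy decoding, and as the width grows it approaches exhaustive search, which is the middle-ground trade-off the overview describes; practical decoders also typically length-normalise scores so that shorter hypotheses are not unfairly favoured.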
Taught by
Alfredo Canziani