Overview
Learn advanced techniques like word embeddings, deep learning attention, and more. Build a machine translation model using recurrent neural network architectures.
Syllabus
- Introduction to Computing With Natural Language
- An introduction to the course outline and prerequisites.
- Feature extraction and embeddings
- Transform text using methods like Bag-of-Words, TF-IDF, Word2Vec, and GloVe to extract features that you can use in machine learning models (a minimal Bag-of-Words/TF-IDF sketch appears after the syllabus).
- Topic Modeling
- In this section, you'll learn to split a collection of documents into topics using Latent Dirichlet Allocation (LDA). In the lab, you'll apply this model to a dataset of news articles (a toy LDA example appears after the syllabus).
- Sentiment Analysis
- Learn to use several machine learning classifiers, including recurrent neural networks, to predict the sentiment of text, and apply them to a dataset of movie reviews (a simple classical baseline is sketched after the syllabus).
- Sequence to Sequence
- Here you'll learn about an RNN architecture for generating one sequence from another. These models are useful for chatbots, machine translation, and more!
- Deep Learning Attention
- Attention is one of the most important recent innovations in deep learning. In this section, you'll learn how attention works, and you'll go over a basic implementation of it in the lab (a bare-bones NumPy version is sketched after the syllabus).
- RNN Keras Lab
- This section will prepare you for the Machine Translation project. Here you'll get hands-on practice with RNNs in Keras (a small Keras sketch appears after the syllabus).
- Project: Machine Translation
- Apply the skills you've learned in Natural Language Processing to the challenging and extremely rewarding task of Machine Translation.
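For the feature extraction section, here is a minimal sketch of Bag-of-Words and TF-IDF, assuming scikit-learn; the syllabus doesn't name a library, so the library choice and the toy corpus are illustrative:

```python
# Bag-of-Words and TF-IDF feature extraction (scikit-learn assumed).
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = [
    "the cat sat on the mat",
    "the dog chased the cat",
]

# Bag-of-Words: raw term counts per document.
bow = CountVectorizer()
counts = bow.fit_transform(docs)
print(bow.get_feature_names_out())  # vocabulary learned from the corpus
print(counts.toarray())             # one count vector per document

# TF-IDF: counts reweighted by inverse document frequency, so words
# that appear in every document (like "the") contribute less.
tfidf = TfidfVectorizer()
print(tfidf.fit_transform(docs).toarray())
```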
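For the topic modeling section, a toy run of Latent Dirichlet Allocation, again assuming scikit-learn; the four documents and the two-topic setting are invented for illustration:

```python
# Latent Dirichlet Allocation on a toy corpus (scikit-learn assumed).
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "stocks fell as markets reacted to interest rates",
    "the team won the championship game last night",
    "investors watch inflation and central bank policy",
    "the striker scored twice in the final match",
]

vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(docs)

# Fit a 2-topic model and print each document's topic mixture.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
print(lda.fit_transform(counts).round(2))

# Show the highest-weight words in each inferred topic.
vocab = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = weights.argsort()[-4:][::-1]
    print(f"topic {k}:", [vocab[i] for i in top])
```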
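For the sentiment analysis section, a simple classical baseline: TF-IDF features with logistic regression on made-up review snippets. The course also covers RNN classifiers, which this sketch deliberately leaves out:

```python
# Sentiment baseline: TF-IDF features + logistic regression (scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "a wonderful, moving film",
    "dull plot and wooden acting",
    "I loved every minute",
    "a waste of two hours",
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(reviews, labels)
print(clf.predict(["what a wonderful film", "so dull"]))  # likely [1 0] on this toy data
```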
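For the attention section, a bare-bones NumPy implementation of dot-product attention. This is one common formulation; the lab's exact variant may differ:

```python
# Dot-product attention: scores come from query-key similarity, softmax
# turns them into weights, and the output is a weighted sum of the values.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dot_product_attention(query, keys, values):
    scores = keys @ query        # similarity of the query to each key
    weights = softmax(scores)    # attention weights sum to 1
    return weights @ values, weights

rng = np.random.default_rng(0)
keys = rng.normal(size=(5, 8))   # 5 encoder states, dimension 8
values = keys                    # in basic attention, keys double as values
query = rng.normal(size=8)       # a decoder state
context, weights = dot_product_attention(query, keys, values)
print(weights.round(2), context.shape)
```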
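For the RNN Keras lab and the translation project, a small many-to-many RNN in Keras (TensorFlow assumed) on invented toy data; the vocabulary sizes, layer dimensions, and reverse-the-sequence task are all assumptions, not the course's dataset. It embeds input tokens, runs a GRU, and predicts an output token at every timestep; a full sequence-to-sequence model, as in the Sequence to Sequence section, splits this into separate encoder and decoder RNNs.

```python
# A many-to-many RNN: one predicted output token per input timestep.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

src_vocab, tgt_vocab, seq_len = 20, 20, 6

model = keras.Sequential([
    layers.Embedding(src_vocab, 16),        # token ids -> dense vectors
    layers.GRU(32, return_sequences=True),  # one recurrent state per timestep
    layers.TimeDistributed(layers.Dense(tgt_vocab, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Toy "parallel corpus": random source sequences mapped to reversed copies.
x = np.random.randint(1, src_vocab, size=(64, seq_len))
y = x[:, ::-1].copy()
model.fit(x, y, epochs=2, verbose=0)
print(model.predict(x[:1]).shape)  # (1, seq_len, tgt_vocab)
```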
Taught by
Luis Serrano, Jay Alammar and Arpan Chakraborty