CNN Applications, RNN, and Attention
Overview
Explore advanced deep learning concepts in this comprehensive lecture covering CNN applications, RNN architectures, and attention mechanisms. Dive into word-level training with minimal supervision, face detection, and semantic segmentation using convolutional neural networks. Learn about ConvNets for long-range adaptive robot vision and scene parsing. Examine recurrent neural networks, their challenges, and techniques to address them, including attention, GRUs, LSTMs, and Seq2Seq models. Gain insights into memory networks and their applications in various deep learning tasks. Benefit from expert explanations and practical examples to enhance your understanding of these cutting-edge machine learning techniques.
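To make two of the named ideas concrete, below is a minimal, self-contained sketch of a gated recurrent encoder (GRU) and scaled dot-product attention. It is not taken from the lecture; it assumes PyTorch, and the tensor sizes, the `attention` helper, and the hyperparameters are illustrative choices, not course material.

```python
# Illustrative sketch only (assumed PyTorch API; toy dimensions).
import torch
import torch.nn.functional as F

# Toy batch: 2 sequences of length 5, each step an 8-dimensional embedding.
x = torch.randn(2, 5, 8)

# Gated recurrence (GRU): gating helps gradients survive long sequences,
# one of the RNN training challenges the lecture discusses.
gru = torch.nn.GRU(input_size=8, hidden_size=16, batch_first=True)
states, last_hidden = gru(x)        # states: (2, 5, 16); last_hidden: (1, 2, 16)

# Scaled dot-product attention: a query attends over all encoder states
# instead of relying on the final hidden state alone.
def attention(query, keys, values):
    # query: (batch, d); keys, values: (batch, seq, d)
    scores = torch.einsum("bd,bsd->bs", query, keys) / keys.size(-1) ** 0.5
    weights = F.softmax(scores, dim=-1)          # (batch, seq), sums to 1
    return torch.einsum("bs,bsd->bd", weights, values), weights

query = last_hidden.squeeze(0)                   # (2, 16) decoder-side query
context, weights = attention(query, states, states)
print(context.shape, weights.shape)              # (2, 16) and (2, 5)
```

In a Seq2Seq model of the kind covered in the lecture, a decoder would compute such a context vector at every output step, weighting the encoder states it finds most relevant.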
Syllabus
– Week 6 – Lecture
– Word-level training with minimal supervision
– Face Detection and Semantic Segmentation
– ConvNet for Long Range Adaptive Robot Vision and Scene Parsing
– Recurrent Neural Networks and Attention Mechanisms
– GRUs, LSTMs, and Seq2Seq Models
– Memory Networks
Taught by
Alfredo Canziani