Overview
This course extends the material from the first course on Creative Applications of Deep Learning, surveying state-of-the-art techniques built around recurrent neural networks. We begin by recapping what we've done up until now and show how to extend our practice to the cloud, where we can make use of much better hardware, including state-of-the-art GPU clusters. We'll also see how the models we train can be deployed in production environments. The techniques learned here will give us a much stronger basis for developing even more advanced algorithms in the final course of the program.
We then move on to some state-of-the-art developments in Deep Learning, including adding recurrent networks to a variational autoencoder in order to learn where to look and write. We also look at how to use neural networks to model parameterized distributions using a mixture density network. Finally, we look at a recent development in Generative Adversarial Networks capable of learning to translate unpaired image collections so that each collection looks like the other. Along the way, we develop a firm understanding, in theory and code, of the components in each of these architectures that make them possible.
Syllabus
Session 1: Cloud Computing, GPUs, Deploying
This session recaps the techniques learned in Course 1 and then describes how to set up an environment for learning on the cloud. It then shows how to serve a pre-trained network in a production environment through a simple RESTful API built with a Python Flask web application.
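As a rough illustration of the deployment step, here is a minimal sketch of a Flask app exposing a pre-trained network over a RESTful endpoint. It assumes TensorFlow 2.x, Flask, Pillow, and NumPy are installed; the choice of MobileNetV2 and the /predict route are placeholders for illustration, not the course's own code.

```python
# Minimal sketch: serving a pre-trained network behind a Flask endpoint.
# MobileNetV2 and the /predict route are illustrative choices only.
import io

import numpy as np
import tensorflow as tf
from flask import Flask, request, jsonify
from PIL import Image

app = Flask(__name__)

# Load the network once at startup so each request only runs inference.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

@app.route("/predict", methods=["POST"])
def predict():
    # Expect an image file in the multipart form field "image".
    img = Image.open(io.BytesIO(request.files["image"].read())).convert("RGB")
    img = img.resize((224, 224))
    x = tf.keras.applications.mobilenet_v2.preprocess_input(
        np.array(img, dtype=np.float32)[np.newaxis])
    preds = model.predict(x)
    top = tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]
    return jsonify({"predictions": [{"label": label, "score": float(score)}
                                    for (_, label, score) in top]})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

With the server running, a request such as `curl -F image=@photo.jpg http://localhost:5000/predict` returns the top predicted labels as JSON.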
Session 2: Mixture Density Networks, Handwriting Synthesis
This session covers a technique for predicting distributions of data called the mixture density network. We cover its importance and its use in the recurrent modeling of handwriting from x,y pen positions.
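For a sense of what a mixture density network looks like in code, the sketch below (assuming TensorFlow 2.x; the layer sizes and the choice of K = 5 Gaussian components are illustrative, not the course's settings) outputs mixture weights, means, and standard deviations for a 1-D target and trains with the mixture's negative log-likelihood, which is what lets the model represent multi-valued outputs such as pen trajectories.

```python
# Minimal sketch of a mixture density network for a 1-D target (TensorFlow 2.x).
import numpy as np
import tensorflow as tf

K = 5  # number of Gaussian mixture components

inputs = tf.keras.Input(shape=(1,))
h = tf.keras.layers.Dense(64, activation="tanh")(inputs)
pi_logits = tf.keras.layers.Dense(K)(h)     # mixing weights (pre-softmax)
mu = tf.keras.layers.Dense(K)(h)            # component means
log_sigma = tf.keras.layers.Dense(K)(h)     # log std-devs, keeps sigma > 0
outputs = tf.keras.layers.Concatenate()([pi_logits, mu, log_sigma])
model = tf.keras.Model(inputs, outputs)

def mdn_nll(y_true, y_pred):
    # Negative log-likelihood of y_true under the predicted Gaussian mixture.
    pi_logits, mu, log_sigma = tf.split(y_pred, 3, axis=-1)
    log_pi = tf.nn.log_softmax(pi_logits, axis=-1)
    sigma = tf.exp(log_sigma)
    # log N(y | mu, sigma) for each component, broadcast over the K components.
    log_prob = (-0.5 * tf.square((y_true - mu) / sigma)
                - log_sigma - 0.5 * np.log(2.0 * np.pi))
    return -tf.reduce_mean(tf.reduce_logsumexp(log_pi + log_prob, axis=-1))

model.compile(optimizer="adam", loss=mdn_nll)

# Toy inverse problem where each x maps to multiple plausible y values.
y = np.random.uniform(-1.0, 1.0, size=(1000, 1)).astype("float32")
x = (np.sin(3.0 * y) + 0.05 * np.random.randn(1000, 1)).astype("float32")
model.fit(x, y, epochs=10, batch_size=64, verbose=0)
```

Sampling from the predicted mixture at each time step, rather than taking a single mean, is what allows diverse handwriting strokes to be generated from the same conditioning.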
Session 3: Modeling Attention With RNNs, DRAW
This session shows how to model one of the most fundamental aspects of intelligence: attention. We'll see how we can teach an autoencoding neural network where to look and where to decode. By conditioning on previous time steps, this greatly simplifies the amount of information the network needs to learn, all while gaining an enormous amount of expressivity.
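A DRAW-style attention mechanism reads a small glimpse from the image through a grid of Gaussian filters whose centre, stride, and width are predicted by the recurrent network at each step. The NumPy sketch below fixes those parameters by hand purely for illustration; it is a rough sketch in the spirit of DRAW, not the course's implementation.

```python
# Minimal NumPy sketch of a separable Gaussian attention window, DRAW-style.
# The attention parameters here are hand-picked; in DRAW they come from the RNN.
import numpy as np

def filterbank(size, centre, stride, sigma2, N):
    """Return an N x size matrix of normalised 1-D Gaussian filters."""
    # Filter centres, spaced `stride` apart around `centre`.
    mu = centre + (np.arange(N) - N / 2.0 + 0.5) * stride
    positions = np.arange(size)
    f = np.exp(-((positions[None, :] - mu[:, None]) ** 2) / (2.0 * sigma2))
    return f / np.maximum(f.sum(axis=1, keepdims=True), 1e-8)

def read_glimpse(image, gx, gy, stride, sigma2, N):
    """Extract an N x N glimpse from a 2-D image with separable filters."""
    Fy = filterbank(image.shape[0], gy, stride, sigma2, N)  # N x H
    Fx = filterbank(image.shape[1], gx, stride, sigma2, N)  # N x W
    return Fy @ image @ Fx.T                                # N x N

# Example: a 7 x 7 glimpse centred on the middle of a random 28 x 28 "image".
image = np.random.rand(28, 28)
glimpse = read_glimpse(image, gx=14.0, gy=14.0, stride=2.0, sigma2=1.0, N=7)
print(glimpse.shape)  # (7, 7)
```

Because the read is just a pair of matrix multiplications, it is fully differentiable, so a network can learn where to look by backpropagation.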
Session 4: PixelCNN And PixelRNN, Generative Images
This session develops an understanding of a major breakthrough in convolutional networks: dilated/atrous convolution.
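The sketch below (TensorFlow 2.x; the filter counts and input size are arbitrary, not the course's architecture) stacks convolutions with dilation rates 1, 2, 4, and 8, which grows the receptive field exponentially with depth while keeping the parameter count per layer fixed, the property that makes dilated/atrous convolution attractive for autoregressive image and audio models.

```python
# Minimal sketch of dilated (atrous) convolution in TensorFlow 2.x.
import tensorflow as tf

inputs = tf.keras.Input(shape=(64, 64, 1))
x = inputs
for rate in (1, 2, 4, 8):
    # Each layer uses the same 3x3 kernel but looks at increasingly
    # spread-out pixels, widening the receptive field without pooling.
    x = tf.keras.layers.Conv2D(
        filters=32, kernel_size=3, padding="same",
        dilation_rate=rate, activation="relu")(x)
outputs = tf.keras.layers.Conv2D(1, kernel_size=1)(x)
model = tf.keras.Model(inputs, outputs)
model.summary()  # spatial size is preserved; only the receptive field grows
```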
Taught by
Parag Mital
Reviews
4.7 rating, based on 3 Class Central reviews
- This course is a brilliant continuation of Part 1. Unfortunately it has been decommissioned by Kadenze because the author moved on. My hope is that Kadenze offers this course as a free one with the author's other courses (Parts one and three) or allows the author to put them up online for everyone to enjoy and benefit from.
- Come on Kadenze guys, open this course again at a reasonable price. If Parag doesn't continue with you, no problem, we want to learn!