Overview
In this course, you will:
a) Learn neural style transfer using transfer learning: extract the content of an image (e.g., a swan) and the style of a painting (e.g., cubist or impressionist), and combine the content and style into a new image.
b) Build simple AutoEncoders on the familiar MNIST dataset, and more complex deep and convolutional architectures on the Fashion MNIST dataset; understand the difference in results between the DNN and CNN AutoEncoder models; identify ways to denoise noisy images; and build a CNN AutoEncoder in TensorFlow that outputs a clean image from a noisy one.
c) Explore Variational AutoEncoders (VAEs) to generate entirely new data, and generate anime faces to compare against reference images.
d) Learn about GANs: their invention, properties, and architecture, and how they differ from VAEs; understand the roles of the generator and the discriminator within the model, the two training phases, and the role of introduced noise; and build your own GAN that can generate faces.
The DeepLearning.AI TensorFlow: Advanced Techniques Specialization introduces the features of TensorFlow that provide learners with more control over their model architecture, and gives them the tools to create and train advanced ML models.
This Specialization is for early and mid-career software and machine learning engineers with a foundational understanding of TensorFlow who are looking to expand their knowledge and skill set by learning advanced TensorFlow features to build powerful models.
Syllabus
- Week 1: Style Transfer
- This week, you will learn how to extract the content of an image (such as a swan) and the style of a painting (such as cubist or impressionist), and combine them into a new image. This is called neural style transfer, and you'll learn how to extract these kinds of features using transfer learning. (A minimal feature-extraction sketch follows the syllabus.)
- Week 2: AutoEncoders
- This week, you'll get an overview of AutoEncoders and how to build them with TensorFlow. You'll learn how to build a simple AutoEncoder on the familiar MNIST dataset, before diving into more complicated deep and convolutional architectures that you'll build on the Fashion MNIST dataset. You'll see the difference in results between the DNN and CNN AutoEncoder models, and then identify ways to denoise noisy images. You'll finish the week building a CNN AutoEncoder using TensorFlow to output a clean image from a noisy one! (A minimal denoising AutoEncoder sketch follows the syllabus.)
- Week 3: Variational AutoEncoders
- This week, you will explore Variational AutoEncoders (VAEs) to generate entirely new data. In this week's assignment, you will generate anime faces and compare them against reference images. (A sketch of the reparameterization trick at the heart of a VAE follows the syllabus.)
- Week 4: GANs
- This week, you'll learn about GANs: what they are, who invented them, their architecture, and how they differ from VAEs. You'll see the roles of the generator and the discriminator within the model, the two training phases, and the role of introduced noise. Then you'll end the week building your own GAN that can generate faces! How cool is that? (A sketch of the two-phase training loop follows the syllabus.)
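
For Week 1, here is a minimal sketch of the feature-extraction side of neural style transfer, using a pre-trained VGG19 as the transfer-learning backbone. The specific layer choices and the random stand-in image are illustrative assumptions, not the course's exact configuration.

```python
# Minimal sketch: extract content and style features with a pre-trained VGG19.
import tensorflow as tf

# Intermediate VGG19 layers commonly used for content and style (assumed choices).
content_layers = ['block5_conv2']
style_layers = ['block1_conv1', 'block2_conv1', 'block3_conv1',
                'block4_conv1', 'block5_conv1']

def feature_extractor(layer_names):
    """Build a model that returns the activations of the given VGG19 layers."""
    vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')
    vgg.trainable = False
    outputs = [vgg.get_layer(name).output for name in layer_names]
    return tf.keras.Model(inputs=vgg.input, outputs=outputs)

def gram_matrix(features):
    """Style is summarized by feature correlations (the Gram matrix)."""
    result = tf.linalg.einsum('bijc,bijd->bcd', features, features)
    shape = tf.shape(features)
    num_locations = tf.cast(shape[1] * shape[2], tf.float32)
    return result / num_locations

extractor = feature_extractor(style_layers + content_layers)
image = tf.random.uniform((1, 224, 224, 3))  # stand-in for a real content/style image
preprocessed = tf.keras.applications.vgg19.preprocess_input(image * 255.0)
activations = extractor(preprocessed)
style_features = [gram_matrix(a) for a in activations[:len(style_layers)]]
content_features = activations[len(style_layers):]
```

From here, style transfer optimizes a generated image so that its content features match the content image and its Gram matrices match the style image.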
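For Week 2, here is a minimal sketch of a convolutional denoising AutoEncoder on Fashion MNIST. The noise level, filter counts, and training settings are illustrative assumptions.

```python
# Minimal sketch: CNN AutoEncoder that maps noisy Fashion MNIST images to clean ones.
import numpy as np
import tensorflow as tf

(x_train, _), (x_test, _) = tf.keras.datasets.fashion_mnist.load_data()
x_train = x_train[..., None].astype('float32') / 255.0
x_test = x_test[..., None].astype('float32') / 255.0

# Corrupt the inputs; the model learns to map noisy images back to clean ones.
noise_factor = 0.3
x_train_noisy = np.clip(x_train + noise_factor * np.random.normal(size=x_train.shape),
                        0.0, 1.0).astype('float32')
x_test_noisy = np.clip(x_test + noise_factor * np.random.normal(size=x_test.shape),
                       0.0, 1.0).astype('float32')

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    # Encoder: downsample to a compact representation.
    tf.keras.layers.Conv2D(32, 3, strides=2, padding='same', activation='relu'),
    tf.keras.layers.Conv2D(64, 3, strides=2, padding='same', activation='relu'),
    # Decoder: upsample back to the original resolution.
    tf.keras.layers.Conv2DTranspose(64, 3, strides=2, padding='same', activation='relu'),
    tf.keras.layers.Conv2DTranspose(32, 3, strides=2, padding='same', activation='relu'),
    tf.keras.layers.Conv2D(1, 3, padding='same', activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(x_train_noisy, x_train, epochs=5, batch_size=128,
          validation_data=(x_test_noisy, x_test))
```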
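For Week 3, here is a minimal sketch of what distinguishes a VAE from a plain AutoEncoder: a sampling layer implementing the reparameterization trick plus a KL-divergence penalty. The tiny dense encoder/decoder, latent size, and loss weighting are illustrative assumptions.

```python
# Minimal sketch: VAE sampling layer (reparameterization trick) with a KL penalty.
import tensorflow as tf

latent_dim = 2

class Sampling(tf.keras.layers.Layer):
    """Draw z = mean + exp(0.5 * log_var) * epsilon, keeping the graph differentiable,
    and register the KL-divergence penalty against a unit Gaussian."""
    def call(self, inputs):
        z_mean, z_log_var = inputs
        kl = -0.5 * tf.reduce_mean(
            tf.reduce_sum(1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=1))
        self.add_loss(kl)
        epsilon = tf.random.normal(shape=tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * epsilon

# Encoder: image -> (z_mean, z_log_var) -> sampled latent vector z.
encoder_inputs = tf.keras.Input(shape=(28, 28, 1))
x = tf.keras.layers.Flatten()(encoder_inputs)
x = tf.keras.layers.Dense(128, activation='relu')(x)
z_mean = tf.keras.layers.Dense(latent_dim)(x)
z_log_var = tf.keras.layers.Dense(latent_dim)(x)
z = Sampling()([z_mean, z_log_var])

# Decoder: sampled latent vector -> reconstructed image.
x = tf.keras.layers.Dense(128, activation='relu')(z)
x = tf.keras.layers.Dense(28 * 28, activation='sigmoid')(x)
decoder_outputs = tf.keras.layers.Reshape((28, 28, 1))(x)

vae = tf.keras.Model(encoder_inputs, decoder_outputs)
# Reconstruction loss comes from compile(); the KL term is added by the Sampling layer.
vae.compile(optimizer='adam', loss='binary_crossentropy')
# vae.fit(images, images, ...) then trains reconstruction + KL jointly; sampling z from
# a unit Gaussian and passing it through the decoder generates entirely new images.
```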
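For Week 4, here is a minimal sketch of the two GAN training phases: updating the discriminator on real vs. generated images, then updating the generator to fool it, starting from random noise. The network sizes, 28x28 grayscale image shape, and optimizer settings are illustrative assumptions; the course's own GAN targets faces.

```python
# Minimal sketch: a GAN training step with its two phases.
import tensorflow as tf

latent_dim = 64

# Generator: maps a random noise vector to a 28x28 grayscale image.
generator = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(latent_dim,)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(28 * 28, activation='sigmoid'),
    tf.keras.layers.Reshape((28, 28, 1)),
])

# Discriminator: scores an image as real (close to 1) or fake (close to 0).
discriminator = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

bce = tf.keras.losses.BinaryCrossentropy()
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(real_images):
    batch_size = tf.shape(real_images)[0]

    # Phase 1: train the discriminator on real images vs. generated fakes.
    noise = tf.random.normal((batch_size, latent_dim))
    with tf.GradientTape() as tape:
        fake_images = generator(noise, training=True)
        real_scores = discriminator(real_images, training=True)
        fake_scores = discriminator(fake_images, training=True)
        d_loss = (bce(tf.ones_like(real_scores), real_scores)
                  + bce(tf.zeros_like(fake_scores), fake_scores))
    d_grads = tape.gradient(d_loss, discriminator.trainable_variables)
    d_opt.apply_gradients(zip(d_grads, discriminator.trainable_variables))

    # Phase 2: train the generator so its fakes are scored as real.
    noise = tf.random.normal((batch_size, latent_dim))
    with tf.GradientTape() as tape:
        fake_scores = discriminator(generator(noise, training=True), training=True)
        g_loss = bce(tf.ones_like(fake_scores), fake_scores)
    g_grads = tape.gradient(g_loss, generator.trainable_variables)
    g_opt.apply_gradients(zip(g_grads, generator.trainable_variables))
    return d_loss, g_loss
```

Calling train_step on each batch of real images alternates the two phases, so the generator and discriminator improve together.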
Taught by
Laurence Moroney