Overview
This course introduces you to two of the most sought-after disciplines in Machine Learning: Deep Learning and Reinforcement Learning. Deep Learning is a subset of Machine Learning with applications in both Supervised and Unsupervised Learning, and it powers many of the AI applications we use daily. First you will learn the theory behind Neural Networks, the basis of Deep Learning, along with several modern Deep Learning architectures. Once you have developed a few Deep Learning models, the course turns to Reinforcement Learning, a type of Machine Learning that has attracted considerable attention recently. Although Reinforcement Learning currently has only a few practical applications, it is a promising area of AI research that may become widely relevant in the near future.
After this course, if you have followed the courses of the IBM Specialization in order, you will have considerable practice and a solid understanding of the main types of Machine Learning: Supervised Learning, Unsupervised Learning, Deep Learning, and Reinforcement Learning.
By the end of this course you should be able to:
Explain the kinds of problems suitable for Unsupervised Learning approaches
Explain the curse of dimensionality, and how it makes clustering difficult with many features
Describe and use common clustering and dimensionality-reduction algorithms
Apply clustering where appropriate and compare the performance of per-cluster models
Understand metrics relevant to characterizing clusters
Who should take this course?
This course targets aspiring data scientists interested in acquiring hands-on experience with Deep Learning and Reinforcement Learning.
What skills should you have?
To make the most of this course, you should be familiar with programming in a Python development environment and have a fundamental understanding of Data Cleaning, Exploratory Data Analysis, Unsupervised Learning, Supervised Learning, Calculus, Linear Algebra, Probability, and Statistics.
Syllabus
- Introduction to Neural Networks
- This module introduces Deep Learning, Neural Networks, and their applications. You will go through the theoretical background and the characteristics that neural networks share with other machine learning algorithms, as well as the characteristics that make them stand out as powerful modeling techniques for specific scenarios. You will also gain hands-on practice with Neural Networks and with key concepts that help these algorithms converge to robust solutions.
- Back Propagation Training and Keras
- In this module, you will learn about the math behind the popular Back Propagation algorithm used to optimize neural networks. In the Back Propagation notebook, you will also see and understand the use of activation functions. The main purpose of most activation functions is to introduce non-linearity into the network so it can learn more complex patterns (a minimal sketch follows the syllabus). Last, but not least, you will learn to use functions and APIs from the Keras library to solve tasks that involve neural networks, starting with loading images.
- Neural Network Optimizers
- You can leverage several options to prioritize either the training time or the accuracy of your neural network and deep learning models. In this module you learn about key concepts that come into play during model training, including optimizers and data shuffling (see the optimizer sketch after the syllabus). You will also gain hands-on practice using Keras, one of the go-to libraries for deep learning.
- Convolutional Neural Networks
- In this module you become familiar with convolutional neural networks (CNNs), also known as space-invariant artificial neural networks, a type of deep neural network frequently used in image AI applications. There are several CNN architectures; you will learn some of the most common ones to add to your toolkit of Deep Learning techniques (a small CNN sketch appears after the syllabus).
- Transfer Learning
- In this module, you will understand what transfer learning is and how it works. You will implement transfer learning in five general steps using a variety of popular pre-trained CNN architectures, such as VGG-16 and ResNet-50 (a transfer-learning sketch follows the syllabus). You will study the differences among those CNN architectures and see how each one solves problems of its predecessors. Last, but not least, as we move to working with deeper neural networks, you will also be equipped with regularization techniques to prevent overfitting of complex models and networks.
- Recurrent Neural Networks and Long Short-Term Memory Networks
- In this module you become familiar with Recurrent Neural Networks (RNNs) and Long Short-Term Memory networks (LSTMs), a type of RNN considered a breakthrough for speech-to-text recognition. RNNs are frequently used in AI applications today and can also be used for supervised learning (an LSTM sketch appears after the syllabus).
- Autoencoders
- In this module you become familiar with Autoencoders, a useful application of Deep Learning for Unsupervised Learning. Autoencoders are a neural network architecture that forces the learning of a lower-dimensional representation of data, commonly images. You will learn some Deep Learning-based techniques for data representation, how autoencoders work, and how to describe the use of trained autoencoders for image applications (an autoencoder sketch follows the syllabus).
- Generative Models and Applications of Deep Learning
- In this module, you will learn about two types of generative models: Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs). We will look at the theory behind each model and then implement them in Keras to generate artificial images, with the goal of making the generated images as realistic as possible (a compact GAN sketch appears after the syllabus). In the last lesson of this module, we will touch on additional topics in deep learning, namely using Keras in a GPU environment to speed up model training.
- Reinforcement Learning
- In this module you become familiar with other novel applications of Neural Networks. You will learn about Generative Adversarial Networks, frequently referred to as GANs, an application of Neural Networks that generates new data. Finally, you learn about Reinforcement Learning, one of the big promises of AI, which trains algorithms using rewards rather than by minimizing an error signal, the approach we have used throughout the course (a Q-learning sketch follows the syllabus).
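Code sketches
The sketches below are not course materials; they are minimal illustrations of the techniques the modules name, and every layer size, dataset, and hyperparameter in them is an assumption made for the example. First, for the Back Propagation module, a small Keras model showing how activation functions such as ReLU introduce non-linearity between dense layers:
```python
# Minimal sketch (not from the course): activation functions between
# dense layers. The layer sizes and input shape are assumptions.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(784,)),              # e.g. a flattened 28x28 image
    layers.Dense(128, activation="relu"),    # ReLU adds non-linearity
    layers.Dense(64, activation="sigmoid"),  # sigmoid squashes to (0, 1)
    layers.Dense(10, activation="softmax"),  # class probabilities
])
model.summary()
```
Without the non-linear activations, the stacked Dense layers would collapse into a single linear map, which is why most activation functions exist.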
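For the Neural Network Optimizers module, a sketch of choosing an optimizer in Keras and of the per-epoch data shuffling that fit() performs by default; the tiny model and the random placeholder data are assumptions made only so the snippet runs:
```python
# Minimal sketch: swapping optimizers and shuffling data in Keras.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),
])

# The optimizer choice trades training time against accuracy;
# Adam, SGD, and RMSprop are common options.
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random placeholder data, only to make the sketch runnable.
x_train = np.random.rand(256, 784).astype("float32")
y_train = np.random.randint(0, 10, size=(256,))

# fit() shuffles the training data before each epoch by default.
model.fit(x_train, y_train, epochs=2, batch_size=32, shuffle=True)
```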
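For the Convolutional Neural Networks module, a compact stack of convolution and pooling layers; the filter counts and the single-channel 28x28 input are assumptions:
```python
# Minimal sketch of a small CNN in Keras.
from tensorflow import keras
from tensorflow.keras import layers

cnn = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),   # a single-channel image (assumed size)
    layers.Conv2D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),  # downsample; the shared filters give
    layers.Conv2D(64, kernel_size=3, activation="relu"),  # spatial invariance
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
cnn.summary()
```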
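For the Transfer Learning module, the general pattern of freezing a pre-trained VGG-16 base and training only a new classification head; the five-class head and dropout rate are assumptions, not the course's exact steps:
```python
# Minimal sketch of transfer learning with a pre-trained VGG-16 base.
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False,
             input_shape=(224, 224, 3))
base.trainable = False                      # freeze the pre-trained weights

model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),                    # regularization against overfitting
    layers.Dense(5, activation="softmax"),  # assumed 5 target classes
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```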
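For the Recurrent Neural Networks module, a minimal LSTM sequence classifier; the vocabulary size, embedding width, and sequence length are assumptions:
```python
# Minimal sketch of an LSTM classifier in Keras.
from tensorflow import keras
from tensorflow.keras import layers

rnn = keras.Sequential([
    layers.Input(shape=(100,)),                 # sequences of 100 token ids
    layers.Embedding(input_dim=10000, output_dim=64),
    layers.LSTM(64),                            # gated memory over the sequence
    layers.Dense(1, activation="sigmoid"),      # e.g. binary classification
])
rnn.summary()
```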
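For the Autoencoders module, a dense autoencoder whose encoder compresses a flattened image into a low-dimensional code that the decoder then reconstructs; the 32-unit bottleneck is an assumption:
```python
# Minimal sketch of a dense autoencoder in Keras.
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(784,))
encoded = layers.Dense(32, activation="relu")(inputs)       # bottleneck code
decoded = layers.Dense(784, activation="sigmoid")(encoded)  # reconstruction

autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
# Trained to reproduce its own input: autoencoder.fit(x, x, ...)
```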
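For the Generative Models module, the two networks that make up a GAN; the 100-dimensional noise input and layer sizes are assumptions, and the adversarial training loop the course implements is omitted here:
```python
# Minimal sketch of a GAN's two networks in Keras.
from tensorflow import keras
from tensorflow.keras import layers

generator = keras.Sequential([
    layers.Input(shape=(100,)),              # random noise vector
    layers.Dense(128, activation="relu"),
    layers.Dense(784, activation="tanh"),    # a fake flattened image
])

discriminator = keras.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # real vs. fake probability
])
```
In training, the two networks compete: the discriminator learns to tell real images from generated ones, while the generator learns to fool it.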
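For the Reinforcement Learning module, a tabular Q-learning sketch on a toy five-state chain, showing learning driven by rewards rather than a supervised error signal; the environment and hyperparameters are assumptions:
```python
# Minimal sketch of tabular Q-learning on a toy chain environment.
import random

# 5 states in a row; actions: 0 = left, 1 = right. Reaching the
# rightmost state ends the episode with reward 1; all other moves give 0.
n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration

def step(state, action):
    nxt = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == n_states - 1 else 0.0
    return nxt, reward

for episode in range(500):
    s = 0
    while s != n_states - 1:
        if random.random() < epsilon:
            a = random.randrange(n_actions)            # explore
        else:                                          # exploit, random tie-break
            best = max(Q[s])
            a = random.choice([i for i in range(n_actions) if Q[s][i] == best])
        nxt, r = step(s, a)
        # Update toward the reward plus discounted future value --
        # the agent learns from rewards, not from labeled errors.
        Q[s][a] += alpha * (r + gamma * max(Q[nxt]) - Q[s][a])
        s = nxt

print(Q)   # Q-values end up favoring "right" in every state
```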
Taught by
Mark J Grover and Miguel Maldonado