Deep Learning Essentials

University of Pennsylvania via Coursera

Overview

Delve into the history of deep learning and explore neural networks such as the perceptron: how they function and what architectures underpin them. Complete short coding assignments in Python.

Syllabus

  • Module 1: History of Deep Learning
    • In this module, we'll first look back through history, discuss the different ways people have attempted to build artificial intelligence, and explore what intelligence is made up of. Then we'll begin our investigation of an early model called the perceptron.
  • Module 2: Perceptron, Stochastic Gradient Descent & Kernel Methods
    • In this module, we will continue exploring the perceptron. We'll delve into stochastic gradient descent (SGD), a fundamental optimization technique that lets the perceptron, and other models, learn from data by iteratively updating the model's parameters to minimize errors; a minimal Python sketch of this training loop follows the syllabus. Afterward, we will look at kernel methods, techniques that can separate two sets of points in more complicated ways, drawing inspiration from how the human eye works.
  • Module 3: Fully Connected Networks
    • In this module, we will move on to fully connected networks. These models can be thought of as one perceptron sitting on top of another, continuing in that fashion. Each layer in a fully connected network takes its inputs from the layer below, works to separate the data points (such as red and blue scattered points) a little better than the layer before it, and passes its output on to the next layer; see the second sketch after the syllabus.
  • Module 4: Backpropagation
    • We will finish the course by looking at backpropagation, an algorithm for training neural networks that finds a set of weights minimizing error on the data. Backpropagation applies the chain rule from calculus to efficiently compute the gradients of the loss function with respect to the weights, so the model can update its weights in the direction opposite the gradient; the final sketch after the syllabus works through these updates on a tiny network. We'll also discuss typical datasets consisting of images, sentences, and sounds, and how neural networks can learn from the spatial regularities present in such data.
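
The first sketch below is a minimal, illustrative take on module 2's training loop: a perceptron fit with stochastic gradient descent. The toy data, learning rate, and epoch count are assumptions for illustration, not taken from the course materials.

    import numpy as np

    rng = np.random.default_rng(0)

    # Two linearly separable clusters of 2-D points (hypothetical toy data).
    X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
    y = np.array([-1] * 50 + [1] * 50)

    w, b, lr = np.zeros(2), 0.0, 0.1   # weights, bias, learning rate

    for epoch in range(10):
        for i in rng.permutation(len(X)):      # one example at a time: "stochastic"
            if y[i] * (X[i] @ w + b) <= 0:     # misclassified (or on the boundary)?
                w += lr * y[i] * X[i]          # nudge the separating line toward it
                b += lr * y[i]

    print("training accuracy:", np.mean(np.sign(X @ w + b) == y))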
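
Module 3's stacking idea shows up already in the forward pass alone. This second sketch assumes ReLU activations and arbitrary layer sizes (2 -> 8 -> 8 -> 1); nothing here is prescribed by the course.

    import numpy as np

    rng = np.random.default_rng(0)

    def layer(n_in, n_out):
        # One fully connected layer: a weight matrix and a bias vector.
        return rng.normal(0, 0.1, (n_in, n_out)), np.zeros(n_out)

    # Layers stacked like "a perceptron on top of a perceptron": 2 -> 8 -> 8 -> 1.
    layers = [layer(2, 8), layer(8, 8), layer(8, 1)]

    def forward(x):
        for W, b in layers[:-1]:
            x = np.maximum(x @ W + b, 0.0)   # each layer re-represents the one below it
        W, b = layers[-1]
        return x @ W + b                     # linear output layer

    print(forward(np.array([[0.5, -1.0]])))  # one 2-D input -> one score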
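
Finally, a sketch of module 4's backpropagation for a one-hidden-layer network with a squared-error loss: the chain rule is applied layer by layer, and each weight then moves opposite its gradient. The data, shapes, and learning rate are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(32, 2))                 # toy inputs
    y = (X[:, :1] * X[:, 1:] > 0).astype(float)  # toy targets

    W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)
    W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
    lr = 0.1

    for step in range(500):
        # Forward pass, keeping intermediates for the backward pass.
        h = np.maximum(X @ W1 + b1, 0.0)         # hidden activations (ReLU)
        out = h @ W2 + b2
        loss = np.mean((out - y) ** 2)

        # Backward pass: chain rule from the loss down to each weight.
        d_out = 2 * (out - y) / len(X)           # dL/d(out)
        dW2, db2 = h.T @ d_out, d_out.sum(0)
        d_h = (d_out @ W2.T) * (h > 0)           # gradient through the ReLU
        dW1, db1 = X.T @ d_h, d_h.sum(0)

        # Update each weight in the direction opposite its gradient.
        W1 -= lr * dW1
        b1 -= lr * db1
        W2 -= lr * dW2
        b2 -= lr * db2

    print("final loss:", loss)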

Taught by

Chris Callison-Burch and Pratik Chaudhari
