10 Best Deep Learning Courses for 2024
Deep learning is all the rage these days. So I’ve compiled the best deep learning courses available online.
Deep learning is a machine learning technique that uses artificial neural networks to learn from data.
In this Best Courses Guide, I’ve leveraged Class Central’s catalog of over 200K courses to find the best deep learning courses available online.
Click on the shortcuts for more details:
- Top Picks
- What is Deep Learning?
- Courses Overview
- Course Selection Methodology and Why You Should Trust Us
Here are my top picks — click on a course to skip to the details:
| Course Highlight | Workload |
|---|---|
| Best Overall Deep Learning Course for Beginners (DeepLearning.AI) | 24 hours |
| Rigorous and Exciting Deep Learning Course (MIT) | 12-55 hours |
| Challenging and Comprehensive Advanced Deep Learning Course (NYU) | 45 hours |
| Amazing Deep Learning Intro with PyTorch (Facebook) | 8 modules |
| Comprehensive Deep Learning Course with an Emphasis on NLP (fast.ai) | 70 hours |
| Deep Learning Course that Teaches You Enough to Get Started (IBM) | 8 hours |
| Deep Learning Basics with Free Certificate (Jovian) | 48-72 hours |
| Intermediate Level Deep Learning Course Focusing on Probabilistic Models (Imperial) | 52 hours |
| Most Comprehensive Course for Machine Learning and Deep Learning (MIT) | 150–210 hours |
| Deep Learning Course with Emphasis on Computer Vision (CU Boulder) | 25–40 hours |
What is Deep Learning?
To explain what deep learning is, let me first explain what machine learning is.
Machine learning involves enabling computers to learn from data largely on their own. For instance, you may implement a machine learning algorithm that can distinguish pictures of dogs from pictures of cats. Initially, the algorithm may not be very good at it. But as you train the algorithm by giving it examples of cats and dogs, it will learn to distinguish them.
Since the ability to ‘learn’ is considered a sign of intelligence, machine learning is hence a part of artificial intelligence. And deep learning is a subset of machine learning. It has the same goal as machine learning (to make computers learn) but approaches the problem with neural networks. Now, what are neural networks?
To put it simply, neural networks are made of neurons, though not neurons in the biological sense; we're talking about software neurons. We call them neurons because they behave similarly to the neurons in our brains: they can receive, process, and pass on information.
Neural networks aim to replicate, in software, the biological mechanism that allows the powerful mushy computer in our heads to see, speak, hear, and react. As it turns out, programming a computer to simulate some of the brain's mechanisms works! Deep learning is what powers translation software, virtual assistants, deepfakes, self-driving cars, and a whole lot more.
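To make that concrete, here is a minimal sketch of a single software neuron in Python (using NumPy, with made-up numbers purely for illustration): it receives inputs, weighs them, and passes the result on through an activation function.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """A single software neuron: weigh the inputs, add a bias,
    then squash the result with a sigmoid activation."""
    z = np.dot(weights, inputs) + bias
    return 1 / (1 + np.exp(-z))

# Made-up numbers, purely for illustration
x = np.array([0.5, 0.1, 0.9])    # incoming signals
w = np.array([0.4, -0.2, 0.7])   # learned weights
print(neuron(x, w, bias=0.1))    # a single value between 0 and 1
```

Deep learning stacks many layers of such neurons and adjusts their weights automatically from data, which is exactly what the courses below teach you to do.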
You can see why companies are scrambling to find and incorporate AI and deep learning solutions into their products. In fact, the global deep learning market is expected to reach up to $528 billion by 2030. And artificial intelligence is mentioned dozens of times in the WEF’s Future of Jobs Report, underscoring its importance in the jobs of tomorrow.
Courses Overview
- All of these courses are free or free to audit
- Six courses offer a certificate of completion (one is free)
- Around 181K people are following Deep Learning Courses on Class Central
- There are more than 2,200 courses in the Deep Learning subject.
Now, let’s get to the top picks!
Best Overall Deep Learning Course for Beginners (DeepLearning.AI)
If you want to break into cutting-edge AI, Neural Networks and Deep Learning will help you do so.
This course is a great option for gaining a fundamental understanding of deep learning, and it is taught by none other than Andrew Ng, a prominent figure in the world of machine learning. The course teaches you how deep learning actually works, rather than presenting only a surface-level description.
By the end of this course, you’ll understand the major technology trends driving deep learning, and you’ll be able to build, train, and apply fully connected deep neural networks.
This course is for motivated students with some understanding of classical machine learning and for early-career software engineers or technical professionals looking to master the fundamentals and gain practical machine learning and deep learning skills.
Neural Networks and Deep Learning is the first course of the Deep Learning Specialization. The specialization will help you understand the capabilities, challenges, and consequences of deep learning and prepare you to participate in the development of leading-edge AI technology. The second course in this series is Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization.
In this course, you’ll learn:
- Introduction to deep learning through examples
- Learn supervised learning and its relation to deep learning
- Explore three major trends: data, computation, and algorithms
- List and discuss major model categories: convolutional and recurrent neural networks, with appropriate use cases
- Basics of neural network programming using Python and NumPy in Jupyter Notebooks
- Solve a machine learning problem with neural networks and use vectorization for speed (see the sketch after this list)
- Learn key concepts: backpropagation, cost function, and gradient descent
- Build single hidden layer (shallow) neural networks
- Build and train 2-layer (deep) neural networks for computer vision tasks: identifying pictures of cats.
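As a taste of the NumPy-and-vectorization material mentioned above, here is a small illustrative sketch (not taken from the course) comparing a Python loop with a vectorized computation of the same neuron outputs over a batch of made-up examples.

```python
import numpy as np

# Toy data: 1,000 made-up examples with 3 features each
X = np.random.rand(1000, 3)
w = np.random.rand(3)
b = 0.0

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Loop version: one example at a time
preds_loop = np.array([sigmoid(np.dot(w, x) + b) for x in X])

# Vectorized version: the whole batch in a single matrix product
preds_vec = sigmoid(X @ w + b)

assert np.allclose(preds_loop, preds_vec)  # same result, far fewer Python-level steps
```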
Andrew Ng co-founded and led Google Brain and was formerly chief scientist at Baidu. He also co-founded Coursera before creating DeepLearning.AI.
Institution | DeepLearning.AI |
Provider | Coursera |
Instructor | Andrew Ng |
Level | Intermediate |
Workload | 24 hours |
Enrollments | 1.3M |
Rating | 4.9 / 5.0 (121K) |
Cost | Free audit |
Certificate | Paid |
Rigorous and Exciting Deep Learning Course (Massachusetts Institute of Technology)
As I write this update, the 2024 lecture videos are still being added to MIT’s 6.S191: Introduction to Deep Learning.
Running yearly since 2017 and open to all for registration, MIT’s introductory course will teach you the foundational knowledge of deep learning as well as help you gain practical experience in building neural networks in TensorFlow. You’ll learn how deep learning methods relate to applications like computer vision, natural language processing, biology, and more!
In the rapidly changing world of AI and deep learning, the course syllabus changes each year, but you can also access past years’ videos in the playlist. Check the year labels on the titles.
You can find more 2024 content including slides, projects, labs, and code on the course website.
The course assumes elementary knowledge of linear algebra and calculus (e.g., matrix multiplication and derivatives). Experience in Python is also helpful but not necessary. If you want to learn or brush up on Python, check out my Python Courses BCG.
The course begins with… a welcome speech by Obama?!? Oh wait, it’s actually Alexander Amini, the course instructor, portraying himself as Obama. What a fitting introduction to the field of deep learning!
The course also covers:
- Introduction to Deep Learning
- Recurrent Neural Networks, Transformers, and Attention
- Convolutional Neural Networks
- Deep Generative Modeling
- Reinforcement Learning
- Language Models and New Frontiers
- Generative AI for Media
- Building AI Models in the Wild
The course’s high production values are reminiscent of Harvard’s CS50. There’s a huge team behind the course!
Institution | Massachusetts Institute of Technology |
Provider | MIT OpenCourseWare |
Instructors | Alexander Amini and Ava Soleimany |
Level | Beginner |
Workload | 12-55 hours |
Views | 3.1M |
Cost | Free |
Challenging and Comprehensive Advanced Deep Learning Course (New York University)
NYU Deep Learning discusses techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition.
The course is taught by none other than Yann LeCun, a prominent figure in machine learning, and considered the father of convolutional neural networks. You’re learning from the very best here. Here’s the course website, where you can find extra resources — GitHub notes, Jupyter notebooks, the works!
The prerequisite for this course is DS-GA 1001 Intro to Data Science or a graduate-level machine learning course.
What you’ll learn:
- This course has 8 themes: Introduction, Parameters Sharing, Energy-Based Models (Foundations & Advanced), Associative Memories, Graphs, Control, Optimisation
- Delve into the history and differences between machine learning and deep learning
- Discuss motivations and mathematical principles of neural networks (chain rule derivation, backpropagation, gradient descent)
- Learn about recurrent and convolutional neural networks and their applications
- Introduce the energy-based model approach for supervised, unsupervised, and self-supervised models
- Explore autoencoders, GANs, transformers, and more
- Study speech recognition using graph transformer networks and graph theory concepts
- Learn algorithms including beam search for speech recognition
- Study planning, control, and optimization, focusing on stochastic gradient descent.
I have to say it again: you’re learning from the best here. Yann LeCun’s reputation in the world of machine learning and deep learning can’t be overstated.
Institution | New York University |
Provider | YouTube |
Instructors | Yann LeCun and Alfredo Canziani |
Level | Intermediate |
Workload | 45 hours |
Views | 187K |
Cost | Free |
Certificate | None |
Amazing Deep Learning Intro with PyTorch (Facebook)
Deep learning is driving the AI revolution and PyTorch is making it easier than ever for anyone to build deep learning applications.
Intro to Deep Learning with PyTorch aims to teach you the basics of deep learning and how to build your own deep neural networks using PyTorch.
You’ll gain practical experience building and training deep neural networks through coding exercises and projects. And you’ll implement state-of-the-art AI applications using style transfer and text generation.
To succeed in this course, you’ll need to be comfortable with Python and data processing libraries such as NumPy and Matplotlib. Basic knowledge of linear algebra and calculus is recommended, but isn’t required to complete the exercises. You’ll apply what you’ve been taught by coding in the various Jupyter notebooks provided. The exercises are easy to follow with walkthrough solutions.
This course covers:
- Introduction to deep learning with PyTorch
- Learn basic concepts: neural networks and gradient descent
- Use NumPy to create a neural network for predicting student admissions
- Transition to programming with PyTorch (see the sketch after this list)
- Interview with PyTorch creator Soumith Chintala
- Focus on computer vision with convolutional neural networks (CNNs) for image classification
- Learn neural style transfer to merge the style of one image with the content of another, along the lines of the influential paper A Neural Algorithm of Artistic Style
- Use recurrent neural networks (RNNs) for sequential data in text
- Implement RNNs for text generation based on Tolstoy’s Anna Karenina, and for sentiment prediction on text such as movie reviews
- Deploy deep learning models with PyTorch, exporting models to C++
- Create and deploy a chatbot in a production environment.
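To give you a flavor of what PyTorch code looks like, here is a minimal, illustrative training loop (not from the course materials) that fits a tiny network to made-up data using gradient descent.

```python
import torch
from torch import nn

# Tiny made-up regression problem: learn y = 2x + 1 from noisy samples
x = torch.linspace(0, 1, 100).unsqueeze(1)
y = 2 * x + 1 + 0.05 * torch.randn_like(x)

model = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

for step in range(200):
    optimizer.zero_grad()        # clear old gradients
    loss = loss_fn(model(x), y)  # forward pass + loss
    loss.backward()              # backpropagation
    optimizer.step()             # gradient descent update
```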
Luis Serrano, the lead instructor, has worked at Google, Apple, and Udacity. He has quite the resume!
Institution | Facebook |
Provider | Udacity |
Instructors | Luis Serrano, Alexis Cook, Soumith Chintala, Cezanne Camacho and Mat Leonard |
Level | Intermediate |
Workload | 8 modules |
Cost | Free |
Certificate | None |
Comprehensive Deep Learning Course with an Emphasis on NLP (fast.ai)
Practical Deep Learning For Coders starts from step one, learning how to get a GPU server online suitable for deep learning, and goes all the way through to creating state-of-the-art models for computer vision, natural language processing, recommendation systems, and more!
In this course you’ll also learn how to use the libraries PyTorch and fastai. PyTorch works best as a low-level library, while fastai adds higher-level functionality on top of PyTorch.
One cool thing about this course is how hands-on it is: it shows you how to set up a cloud GPU to train models on, and how to use Jupyter Notebooks to write and experiment with code.
This course is designed for anyone with at least a year of coding experience (preferably in Python) and some memory of high-school math.
What you’ll learn:
- Introduction to deep learning, its relation to machine learning and computer programming
- Learn critical machine learning topics, including confidence intervals and project planning, and create production-ready models
- Discuss data augmentation and the math and code of stochastic gradient descent
- Build and deploy GUI apps for notebooks and standalone web applications
- Train a neural network from scratch, learning about the sigmoid function, arrays, tensors, and PyTorch training loops
- Explore data ethics with case studies
- Dive into the softmax activation function and multi-label problems
- Learn decision trees and ensembles (random forests, gradient boosting machines), their limitations, and solutions
- Compete on Kaggle with decision tree approaches
- Study natural language processing (NLP), including tokenization, numericalization, word embedding, and recurrent neural networks
- Learn tricks to improve NLP model results.
Deep Learning for Coders with Fastai and PyTorch: AI Applications Without a PhD is the accompanying book for this course. It is also freely available as a series of Jupyter Notebooks.
Practical Deep Learning for Coders part 2: Deep Learning Foundations to Stable Diffusion (30 hours) is a follow-up course.
Institution | fast.ai |
Instructor | Jeremy Howard |
Level | Intermediate |
Workload | 70 hours |
Cost | Free |
Certificate | None |
Deep Learning Course that Teaches You Enough to Get Started (IBM)
Introduction to Deep Learning & Neural Networks with Keras introduces you to the field of deep learning. This course will not teach you everything about deep learning, but it will teach you just enough to get started on more advanced courses and learn independently.
You will learn about the different deep learning models and build your first deep learning model using the Keras library. By the end of this course, you’ll be able to describe neural networks and deep learning models, understand unsupervised deep learning models (autoencoders and restricted Boltzmann machines) and supervised deep learning models (convolutional neural networks and recurrent networks), and most importantly, build deep learning models and networks using the Keras library.
To take this course you should have some Python programming knowledge and a little experience with machine learning.
In this course:
- Explore deep learning applications: restoring color in photos and synthesizing audio
- Learn about biological and artificial neural networks, and forward propagation
- Study the gradient descent algorithm and optimization of variables
- Understand backpropagation for neural network learning, weight, and bias updates
- Learn about activation functions
- Practical part: an overview of deep learning libraries (PyTorch, TensorFlow, and Keras), with a focus on Keras
- Differentiate between shallow and deep neural networks
- Learn about convolutional networks and build them with Keras
- Study recurrent neural networks and autoencoders
- Final project (not available to auditing learners): build a regression model with Keras, experimenting with model depth and width (see the sketch after this list).
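For a sense of what building a model with Keras feels like, here is a minimal, illustrative sketch (not from the course) of a small fully connected regression network trained on made-up data.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Made-up tabular data: 500 examples, 8 features, one continuous target
X = np.random.rand(500, 8)
y = np.random.rand(500, 1)

# A small fully connected regression network
model = keras.Sequential([
    keras.Input(shape=(8,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(1),                       # single output for regression
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```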
This course is part of the 6-course IBM AI Engineering Professional Certificate, which is designed to equip you with the tools you need to succeed in your career as an AI or ML engineer. The course prior to this is Machine Learning with Python.
Institution | IBM |
Provider | Coursera |
Instructor | Alex Aklson |
Level | Intermediate |
Workload | 8 hours |
Enrollments | 53.2K |
Rating | 4.7 / 5.0 (1.5K) |
Cost | Free audit |
Certificate | Paid |
Deep Learning Basics with Free Certificate (Jovian)
Deep Learning with PyTorch: Zero to GANs provides a coding-first introduction to deep learning using the PyTorch framework. Although it is suitable for beginners, it’s recommended that you have some programming knowledge (preferably in Python), the basics of linear algebra (vectors, matrices, dot products), and the basics of calculus (differentiation and the geometric interpretation of the derivative).
The course is called “Zero to GANs” because it assumes no prior knowledge of deep learning (i.e. you can start from zero), and by the end of the six weeks, you’ll be familiar with building Generative Adversarial Networks or GANs.
What you’ll learn:
- Basics of PyTorch: tensors, gradients, and autograd (see the sketch after this list)
- Implement linear regression and gradient descent from scratch using PyTorch
- Work with the MNIST dataset to classify handwritten digits
- Perform training-validation split and learn logistic regression
- Train, evaluate, and sample predictions from your model
- Create a deep neural network with hidden layers and non-linear activations
- Use cloud-based GPUs for training deep neural networks and hyperparameter tuning
- Learn convolutional neural networks (CNNs) for superior image classification
- Key CNN concepts: convolutions, residual connections, batch normalization, avoiding underfitting & overfitting
- Explore Generative Adversarial Networks (GANs) to train generator and discriminator networks
- Build applications with GANs: generating fake digits and anime faces.
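Here is a tiny, illustrative example (not from the course) of the PyTorch autograd behavior the first module introduces: tensors that track gradients through a computation.

```python
import torch

# Tensors that ask PyTorch to track operations on them
x = torch.tensor(3.0, requires_grad=True)
w = torch.tensor(2.0, requires_grad=True)
b = torch.tensor(1.0, requires_grad=True)

y = w * x + b    # a tiny computation graph
y.backward()     # autograd walks the graph backwards

print(x.grad)    # dy/dx = w = 2.0
print(w.grad)    # dy/dw = x = 3.0
print(b.grad)    # dy/db = 1.0
```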
The course uses a hands-on approach by allowing you to follow along and experiment with code in Jupyter Notebooks. Regarding assessments, you’ll receive weekly assignments and work on various projects with real-world datasets to hone your skills.
Jovian also offers many other Python courses related to Data Science, including Data Analysis with Python and Machine Learning with Python.
Institution | Jovian |
Instructor | Aakash N S |
Level | Beginner |
Workload | 48-72 hours |
Enrollments | 25.5K |
Cost | Free |
Certificate | Free |
Intermediate Level Deep Learning Course Focusing on Probabilistic Models (Imperial College London)
You will learn how to develop probabilistic models with TensorFlow in Probabilistic Deep Learning with TensorFlow 2, making particular use of the TensorFlow Probability library. The course builds on the foundational concepts of TensorFlow 2 and focuses on the probabilistic approach to deep learning — getting the model to know what it doesn’t know.
This course is a continuation of the previous two courses in the TensorFlow 2 for Deep Learning specialization, Getting started with TensorFlow 2 and Customising your models with TensorFlow 2. Additional prerequisite knowledge required for this course is a solid foundation in probability and statistics (e.g., standard probability distributions, probability density functions, maximum likelihood estimation).
This course covers:
- TensorFlow Distributions:
  - Learn TensorFlow Probability (TFP) for probabilistic modeling
  - Use Distribution objects in TFP to sample, compute probabilities, and create trainable distributions (see the sketch after this list)
- Probabilistic Layers:
  - Develop deep learning models with probabilistic layers to measure uncertainty in data and models
  - Apply these techniques to critical applications like medical diagnoses
- Normalizing Flows:
  - Use bijector objects in TFP to implement normalizing flows
  - Model a data distribution by transforming a simple base distribution
  - Sample new data points and evaluate the likelihood of data examples
- Variational Autoencoders (VAE):
  - Implement VAEs using TFP
  - Train the encoder (inference network) and decoder (generative network) jointly
  - Encode data into a compressed latent space and generate new samples
- Capstone Project:
  - Create a synthetic image dataset using normalizing flows
  - Train a VAE on the dataset
  - Integrate and apply concepts from previous modules, demonstrating proficiency in probabilistic deep learning with TFP.
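To illustrate what working with TFP Distribution objects looks like, here is a minimal sketch (not from the course) that creates a distribution, samples from it, and computes log-probabilities.

```python
import tensorflow_probability as tfp

tfd = tfp.distributions

# A Distribution object: a standard normal
normal = tfd.Normal(loc=0.0, scale=1.0)

samples = normal.sample(5)             # draw 5 samples
log_probs = normal.log_prob(samples)   # log-density of each sample

print(samples.numpy())
print(log_probs.numpy())
```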
Note: assessments are only available to paying learners. In the assessments, you’ll put the concepts you learn into practice through hands-on coding tutorials. In addition, there is a series of automatically graded programming assignments to help you consolidate your skills.
Institution | Imperial College London |
Provider | Coursera |
Instructor | Kevin Webster |
Level | Advanced |
Workload | 52 hours |
Enrollments | 13.2K |
Rating | 4.7 / 5.0 (99) |
Cost | Free audit |
Certificate | Paid |
Most Comprehensive Course for Machine Learning and Deep Learning (Massachusetts Institute of Technology)
Have you looked through all the previous courses and realized that you do not have a foundational grasp of machine learning, but do want to eventually learn about deep learning? If you’re up for the challenge (and are willing to stomach some mathematics), this rigorous MIT course is for you!
Machine Learning with Python: from Linear Models to Deep Learning introduces you to the field of machine learning, from linear models to deep learning and reinforcement learning, with a hands-on approach. You’ll implement and experiment with the algorithms in several Python projects designed for different practical applications.
To be successful in this course, you should be proficient in Python programming (6.00.1x), as well as probability theory (6.431x), college-level single and multivariable calculus, and vectors and matrices.
What you’ll learn:
- Introduction:
  - Brief review of linear algebra and probability
  - Introduction to machine learning principles: training, validation, parameter tuning, feature engineering
- Supervised Learning:
  - Learn about linear classifiers
  - Explore hinge loss, margin boundaries, and regularization (see the sketch after this list)
  - Build an automatic review analyzer
- Nonlinear Classification and Regression:
  - Study nonlinear classification, linear regression, and collaborative filtering
  - Lessons on stochastic gradient descent, overfitting, and generalization
  - Create the first part of a digit recognition model
- Deep Learning and Neural Networks:
  - Learn about neural network construction
  - Explore recurrent neural networks and convolutional neural networks
  - Complete the digit recognition model
- Unsupervised Learning:
  - Focus on clustering, expectation-maximization (EM) algorithms, and generative and mixture models
  - Develop a collaborative filtering model using the EM algorithm
- Reinforcement Learning and NLP:
  - Learn reinforcement learning concepts
  - Introduction to natural language processing (NLP)
- Final project:
  - Create a text-based game using NLP and reinforcement learning techniques
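As a taste of the hinge-loss material, here is a small, illustrative NumPy sketch (not from the course) that trains a regularized linear classifier by subgradient descent on a made-up dataset.

```python
import numpy as np

# Made-up binary classification data with labels in {-1, +1}
X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.5, -0.5], [-2.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

theta = np.zeros(2)
lam, lr = 0.01, 0.1   # regularization strength and learning rate

for _ in range(100):
    margins = y * (X @ theta)
    # Hinge loss: zero once the margin reaches 1, a linear penalty otherwise
    loss = np.mean(np.maximum(0, 1 - margins)) + lam * theta @ theta
    # Subgradient of the average hinge loss plus the L2 regularizer
    grad = -(X.T @ (y * (margins < 1))) / len(y) + 2 * lam * theta
    theta -= lr * grad

print(theta, loss)
```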
Regarding assessments, there are three projects to complete, along with a midterm and a final exam.
This course is part of the Statistics and Data Science MicroMasters Program.
Institution | Massachusetts Institute of Technology |
Provider | edX |
Instructors | Regina Barzilay, Tommi Jaakkola, and Karene Chu |
Level | Advanced |
Workload | 150–210 hours |
Enrollments | 276K |
Rating | 4.1 / 5.0 (118) |
Cost | Free audit |
Certificate | Paid |
Deep Learning Course with Emphasis on Computer Vision (University of Colorado Boulder)
From the University of Colorado Boulder, Deep Learning Applications for Computer Vision will guide you through the field of computer vision with a hands-on approach.
In this course, you’ll learn about computer vision as a field of study and research. By the end of the course, you’ll be equipped with the deep learning techniques and machine learning tools needed to tackle a wide range of computer vision tasks.
The prerequisites for this course are basic calculus (differentiation and integration), linear algebra, and proficiency in Python programming.
What you’ll learn in this course:
- Introduction to Computer Vision:
  - Overview of computer vision goals: object detection, recognition, motion tracking
  - Impact of machine learning and deep learning on computer vision
- Classic Computer Vision Tools and Techniques:
  - Explanation of algorithmic steps and the convolution operation
  - Application of image filters (see the sketch after this list)
  - Advantages and disadvantages of classic algorithmic solutions
- Challenges and Solutions in Classic Computer Vision:
  - Challenges in object recognition
  - Steps for achieving object recognition and image classification
- Neural Networks and Deep Learning:
  - Differences between neural networks and classic computer vision pipelines
  - Basic components of a neural network and training steps
  - TensorFlow tutorial: building, training, and using a neural network for image classification
- Convolutional Neural Networks (CNNs):
  - Structure and layers of CNNs
  - Understanding parameters and hyperparameters
  - Building, training, and using a deep neural network for image classification.
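To illustrate the classic convolution operation mentioned above, here is a small sketch (not from the course) that applies a vertical edge-detection filter to a made-up grayscale image with SciPy.

```python
import numpy as np
from scipy.signal import convolve2d

# A made-up 6x6 grayscale "image" with a bright vertical stripe
image = np.zeros((6, 6))
image[:, 2:4] = 1.0

# A classic vertical edge-detection kernel (Sobel-like)
kernel = np.array([[1, 0, -1],
                   [2, 0, -2],
                   [1, 0, -1]])

edges = convolve2d(image, kernel, mode="same")
print(edges)  # strong responses where intensity changes left to right
```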
If you are paying for the certificate, each week comes with a graded assessment to complete, along with a final quiz.
This course is actually part of the university’s Master of Science in Data Science degree, meaning you’ll learn what university students learn! It uses the book Computer Vision: A Modern Approach, which is helpful but not compulsory.
Institution | University of Colorado Boulder |
Provider | Coursera |
Instructor | Ioana Fleming |
Level | Intermediate |
Workload | 23 hours |
Enrollments | 6.4K |
Rating | 4.6 / 5.0 (60) |
Cost | Free audit |
Certificate | Paid |
Course Selection Methodology and Why You Should Trust Us
I built this guide following the now tried-and-tested methodology used in our previous BCGs (you can find them all here). It involves a three-step process:
First, let me introduce myself. I’m a content writer for Class Central. Class Central, a Tripadvisor for online education, has helped 60 million learners find their next course. We’ve been combing through online education for more than a decade to aggregate a catalog of 200,000 online courses and 200,000 reviews written by our users.
I (@elham), in collaboration with my friend and colleague @manoel, began by leveraging Class Central’s database to make a preliminary selection of deep learning courses. We looked at ratings, reviews, and course bookmarks to bubble up some of the most loved and popular deep learning courses.
But we didn’t stop there. Ratings and reviews rarely tell the whole story. So the next step was to bring our personal knowledge of online education into the fold.
Second, we used our experience as online learners to evaluate each preliminary pick.
Both of us come from computer science backgrounds and are prolific online learners, having completed about 45 MOOCs between us. Additionally, Manoel has an online bachelor’s in computer science, while I am currently completing my foundation in computer science. Hence, deep learning is something both of us have struggled with!
By carefully analyzing each course and bouncing ideas off each other, we made iterative improvements to the guide until we were both satisfied.
Third, during our research, we came across courses that we felt were well-made but weren’t well-known. Had we adopted a purely data-centric approach, we would have been forced to leave those courses out of the guide just because they had fewer enrollments.
So instead, we decided to take a more holistic approach. By including a wide variety of courses on deep learning, we hope this guide will have something for everyone.
After going through this process — combining Class Central data, our experience as lifelong learners, and a lot of editing — we arrived at our final list. We intend to continue updating it in the future.
Pat revised the latest version of this article.