Overview
In this course, you'll learn about different types of supervised learning and how to use them to solve real-world problems.
Syllabus
- Introduction to Supervised Learning
- Before diving into the many algorithms of machine learning, it is important to step back and understand the big picture of the field as a whole.
- Linear Regression
- Linear regression is one of the most fundamental algorithms in machine learning. In this lesson, learn how linear regression works!
- Perceptron Algorithm
- The perceptron is an algorithm for classifying data, and it is the building block of neural networks.
- Decision Trees
- Decision trees are a structure for decision-making where each decision leads to a set of consequences or additional decisions.
- Naive Bayes
- Naive Bayes algorithms are powerful tools for building classifiers from labeled data. In particular, Naive Bayes is frequently used for text classification problems.
- Support Vector Machines
- Support vector machines are a common method for classification problems. They are especially effective when paired with what is known as the 'kernel trick'!
- Ensemble Methods
- Bagging and boosting are two common ensemble methods that combine simple algorithms into more advanced models that perform better than the simple algorithms would on their own (see the short sketch after this syllabus).
- Model Evaluation Metrics
- Learn the main metrics used to evaluate models, such as accuracy, precision, recall, and more! (A short worked example follows this syllabus.)
- Training and Tuning
- Learn the main types of errors that can occur during training, and several methods to deal with them and optimize your machine learning models.
- Finding Donors Project
- You've covered a wide variety of methods for performing supervised learning -- now it's time to put those into action!
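As a taste of the Ensemble Methods lesson, here is a minimal sketch contrasting bagging and boosting built on the same weak learner. It assumes scikit-learn as the toolkit and a synthetic dataset; this is an illustration under those assumptions, not the course's own code or data.

```python
# Minimal sketch: bagging vs. boosting on the same weak learner (a depth-1 "stump").
# scikit-learn and the synthetic dataset are assumptions for illustration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Bagging: train many stumps on bootstrap samples of the data and average their votes.
bagging = BaggingClassifier(DecisionTreeClassifier(max_depth=1),
                            n_estimators=50, random_state=0)

# Boosting: train stumps one after another, reweighting the examples each stump misses.
boosting = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                              n_estimators=50, random_state=0)

for name, model in [("bagging", bagging), ("boosting", boosting)]:
    model.fit(X_train, y_train)
    print(name, model.score(X_test, y_test))
```

Both ensembles typically beat a single stump, but for different reasons: bagging reduces variance by averaging many independently trained learners, while boosting reduces bias by making each new learner focus on the previous learners' mistakes.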
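Similarly, the metrics named in the Model Evaluation Metrics lesson can be computed by hand from the four confusion-matrix counts. The counts below are made-up illustration values, not course data.

```python
# Minimal sketch of accuracy, precision, recall, and F1 from confusion-matrix counts.
# The counts are invented for illustration (e.g., a spam classifier), not course data.
true_positives = 30   # spam correctly flagged as spam
false_positives = 10  # ham incorrectly flagged as spam
false_negatives = 5   # spam the classifier missed
true_negatives = 55   # ham correctly left alone

total = true_positives + false_positives + false_negatives + true_negatives

accuracy = (true_positives + true_negatives) / total
precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)
f1 = 2 * precision * recall / (precision + recall)

print(f"accuracy:  {accuracy:.2f}")   # 0.85
print(f"precision: {precision:.2f}")  # 0.75
print(f"recall:    {recall:.2f}")     # 0.86
print(f"F1 score:  {f1:.2f}")         # 0.80
```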
Taught by
Luis Serrano and Josh Bernhard