Advanced Methods in Machine Learning Applications

Johns Hopkins University via Coursera

Overview

The course "Advanced Methods in Machine Learning Applications" delves into sophisticated machine learning techniques, offering learners an in-depth understanding of ensemble learning, regression analysis, unsupervised learning, and reinforcement learning. The course emphasizes practical application, teaching students how to apply advanced techniques to solve complex problems and optimize model performance. Learners will explore methods like bagging, boosting, and stacking, as well as advanced regression approaches and clustering algorithms. What sets this course apart is its focus on real-world challenges, providing hands-on experience with advanced machine learning tools and techniques. From exploring reinforcement learning for decision-making to applying Apriori analysis for association rule mining, this course equips learners with the skills to handle increasingly complex datasets and tasks. By the end of the course, learners will be able to implement, optimize, and evaluate sophisticated machine learning models, making them well-prepared to address advanced challenges in both research and industry.

Syllabus

  • Course Introduction
    • This course provides a comprehensive exploration of advanced machine-learning techniques, including ensemble methods, regression analysis, and unsupervised learning algorithms. Students will gain hands-on experience with reinforcement learning and decision tree models while applying association rule mining on real datasets. Emphasis is placed on evaluating model performance and comparing various learning approaches. By the end, participants will be equipped with practical skills to tackle complex data-driven challenges.
  • Ensemble Learning
    • You can enhance supervised learning by using multiple weak classifiers, each working on a subset of features with limited learning capability on its own. By leveraging their sheer numbers and majority voting, ensemble classifiers consistently outperform complex individual classifiers and offer greater robustness. Random Forest, considered one of the premier ensemble classifiers, relies on weak decision tree classifiers, so decision tree classifiers and their visualizations will be introduced in this module. You will also see how employing numerous weak classifiers with reduced feature sets from the dataset can achieve combined voting performance that surpasses any individual classifier (see the brief code sketch after the syllabus).
  • Regression
    • Certain problems you encounter will demand precise numerical predictions, such as forecasting the seasonal flu arrival rate or predicting next week's stock market index. For such scenarios, regression techniques prove invaluable. Throughout this module, you'll explore various types of regression, solve linear regression equations analytically, define cost functions, and understand situations where linear regression may falter. Additionally, you'll delve into coding quadratic and logistic regressions from scratch, utilizing polynomial features and SciPy optimizers (see the brief code sketch after the syllabus). Logistic regression, a widely used classification method, fits data to a logistic curve based on dataset features. You'll apply logistic regression to develop a predictive model for cancer recurrence using patient diagnostic data.
  • Unsupervised Learning
    • In this module, you will explore unsupervised learning, the counterpart to supervised learning. Unsupervised learning aims to model the underlying probability distribution of a dataset, treating its features as random variables, which makes it possible to identify outliers and the centroids of dense regions. You'll begin with the distance and similarity metrics crucial to clustering algorithms. Popular algorithms such as k-means, DBSCAN, hierarchical clustering, and expectation-maximization (EM) will be introduced briefly, along with metrics that evaluate cluster quality, 3D visualizations, and dendrograms. Using an artificial dataset similar to the one used in supervised learning, you will apply clustering techniques, and you'll also see clustering in action on the famous iris dataset with several algorithms. Throughout, you'll discover how the elbow method helps determine the optimal number of clusters (see the brief code sketch after the syllabus).
  • Reinforcement Learning and Apriori Analysis
    • In this module, you will explore reinforcement learning, completing the trio of major learning strategies alongside supervised and unsupervised methods. Much as humans learn to navigate their environments, reinforcement learning operates in scenarios where ground truth is absent or impractical, relying instead on interactions with the environment. You'll discover how policies are learned through rewards and penalties to maximize benefits or minimize costs. Reinforcement learning is widely applied in teaching computers to play complex board games like backgammon or chess, and AlphaGo's triumph over the Go world champion exemplifies its capabilities in AI advancement. You'll delve into the reinforcement learning model, its terminology, and typical problems such as tic-tac-toe and elevator control. Techniques for developing a mathematical model such as Q-learning, based on states and actions, will be explored (see the brief code sketch after the syllabus), culminating in hands-on implementation to master a chosen game.
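
For the Ensemble Learning module above, here is a minimal Python sketch of the core idea: many shallow ("weak") decision trees, each restricted to a random subset of features, combined by majority voting in a Random Forest and compared against a single shallow tree. This is not course material; the synthetic dataset and all hyperparameters are arbitrary choices for illustration.

    # Illustrative sketch, not course material: majority voting with weak trees.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=2000, n_features=20,
                               n_informative=8, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # A single shallow ("weak") decision tree.
    single_tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    single_tree.fit(X_train, y_train)

    # Many weak trees, each trained on a bootstrap sample and a random subset
    # of features; their predictions are combined by majority vote.
    forest = RandomForestClassifier(n_estimators=200, max_depth=3,
                                    max_features="sqrt", random_state=0)
    forest.fit(X_train, y_train)

    print("single tree accuracy:", single_tree.score(X_test, y_test))
    print("random forest accuracy:", forest.score(X_test, y_test))

With settings like these, the voting ensemble typically scores noticeably higher than the single weak tree, which is the point the module makes.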
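
For the Regression module, here is a minimal sketch, again on assumed synthetic data rather than the course's cancer-recurrence dataset, of solving the linear regression equations analytically and fitting a logistic regression from scratch by minimizing a cross-entropy cost with a SciPy optimizer.

    # Illustrative sketch, not course material: analytic linear regression and
    # from-scratch logistic regression fitted with a SciPy optimizer.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=200) > 0).astype(float)
    Xb = np.hstack([np.ones((200, 1)), X])        # prepend an intercept column

    # Analytic (least-squares) solution of the linear regression equations.
    w_linear, *_ = np.linalg.lstsq(Xb, y, rcond=None)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def cost(w):
        # Cross-entropy (negative log-likelihood) of the logistic model.
        p = sigmoid(Xb @ w)
        eps = 1e-12                               # guard against log(0)
        return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

    result = minimize(cost, x0=np.zeros(3), method="BFGS")
    accuracy = np.mean((sigmoid(Xb @ result.x) > 0.5) == y)
    print("linear weights:", w_linear)
    print("logistic weights:", result.x, "accuracy:", accuracy)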
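
For the Unsupervised Learning module, here is a short sketch of k-means on the iris dataset with the elbow heuristic: the within-cluster sum of squares (inertia) is printed for several values of k, and the point where the curve flattens suggests a reasonable cluster count. The range of k values tried is an arbitrary choice for illustration.

    # Illustrative sketch, not course material: k-means with the elbow heuristic.
    from sklearn.cluster import KMeans
    from sklearn.datasets import load_iris

    X = load_iris().data

    for k in range(1, 8):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        print(f"k={k}  inertia={km.inertia_:.1f}")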
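
For the Reinforcement Learning and Apriori Analysis module, here is a toy sketch of tabular Q-learning on a hypothetical five-state corridor in which reward is earned only at the rightmost state. The environment, hyperparameters, and episode count are assumptions for demonstration and are not the course's game assignment.

    # Illustrative sketch, not course material: tabular Q-learning on a toy corridor.
    import random

    N_STATES, ACTIONS = 5, (-1, +1)               # states 0..4; move left or right
    alpha, gamma, epsilon = 0.1, 0.9, 0.2         # learning rate, discount, exploration
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    for episode in range(500):
        s = 0
        while s != N_STATES - 1:                  # episode ends at the goal state
            # Epsilon-greedy action selection.
            if random.random() < epsilon:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s_next = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s_next == N_STATES - 1 else 0.0
            # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
            best_next = max(Q[(s_next, act)] for act in ACTIONS)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s_next

    # The learned greedy policy should move right in every non-terminal state.
    print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})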

Taught by

Erhan Guven
