Bayesian Networks 3 - Maximum Likelihood - Stanford CS221: AI (Autumn 2019)

Stanford University via YouTube

Overview

Learn about Bayesian networks and probabilistic inference in this Stanford University lecture from the CS221: AI course (Autumn 2019). Explore where the parameters of a Bayesian network come from, survey the learning task, and work through examples including v-structures, inverted-v structures, and Naive Bayes. Understand parameter sharing, Hidden Markov Models (HMMs), and the learning algorithm for the general case. Discover maximum likelihood estimation, regularization via Laplace smoothing, and maximum marginal likelihood, concluding with an introduction to the Expectation Maximization (EM) algorithm.
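
For a concrete preview of the estimation idea at the center of the lecture: with fully observed data, maximum likelihood learning of Bayesian network parameters reduces to counting and normalizing, and Laplace smoothing regularizes the estimate by adding a pseudocount to every value. The sketch below is illustrative only, not code from the course; the function name mle_with_laplace and the ratings data are assumptions made for this example.

    from collections import Counter

    def mle_with_laplace(counts, domain, lam=1.0):
        """Maximum likelihood with Laplace smoothing:
        p(x) = (count(x) + lam) / (N + lam * |domain|).
        lam = 0 recovers the plain count-and-normalize estimate."""
        total = sum(counts[x] for x in domain) + lam * len(domain)
        return {x: (counts[x] + lam) / total for x in domain}

    # Hypothetical fully observed data: five ratings drawn from {1, ..., 5}.
    data = [1, 3, 4, 4, 5]
    probs = mle_with_laplace(Counter(data), domain=range(1, 6), lam=1.0)
    print(probs)  # e.g. p(4) = (2 + 1) / (5 + 1 * 5) = 0.3

The same count-and-normalize step applies to each conditional distribution in a larger network (and to each shared parameter group under parameter sharing); the EM algorithm introduced at the end of the lecture alternates this maximization step with an expectation step that fills in expected counts for unobserved variables.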

Syllabus

Introduction.
Announcements.
Review: Bayesian network.
Review: probabilistic inference.
Where do parameters come from?
Roadmap.
Learning task.
Example: one variable.
Example: v-structure.
Example: inverted-v structure.
Parameter sharing.
Example: Naive Bayes.
Example: HMMs.
General case: learning algorithm.
Maximum likelihood.
Scenario 2.
Regularization: Laplace smoothing.
Example: two variables.
Motivation.
Maximum marginal likelihood.
Expectation Maximization (EM).

Taught by

Stanford Online
