Bayesian Networks 2 - Forward-Backward - Stanford CS221: AI

Stanford University via YouTube

Overview

Learn about advanced concepts in Bayesian networks and probabilistic inference in this Stanford University lecture from the CS221: AI course. Explore hidden Markov models, lattice representations, and particle filtering techniques. Dive into topics such as beam search, object tracking, and Gibbs sampling. Gain a deeper understanding of the forward-backward algorithm and its applications in artificial intelligence through explanations and demonstrations.
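As a rough illustration of the lecture's central algorithm, here is a minimal Python sketch of forward-backward smoothing on a tiny hidden Markov model. The state names, transition and emission probabilities, and observation sequence are invented for illustration and are not taken from the lecture.

```python
def forward_backward(states, start, trans, emit, observations):
    """Return smoothed posteriors P(H_t | e_1..e_T) for each time step t."""
    T = len(observations)

    # Forward pass: F[t][h] is proportional to P(H_t = h, e_1..e_t).
    F = [{h: start[h] * emit[h][observations[0]] for h in states}]
    for t in range(1, T):
        F.append({
            h: emit[h][observations[t]] * sum(F[t - 1][hp] * trans[hp][h] for hp in states)
            for h in states
        })

    # Backward pass: B[t][h] is proportional to P(e_{t+1}..e_T | H_t = h).
    B = [dict.fromkeys(states, 1.0) for _ in range(T)]
    for t in range(T - 2, -1, -1):
        B[t] = {
            h: sum(trans[h][hp] * emit[hp][observations[t + 1]] * B[t + 1][hp] for hp in states)
            for h in states
        }

    # Combine and normalize: S_t(h) is proportional to F_t(h) * B_t(h).
    posteriors = []
    for t in range(T):
        unnorm = {h: F[t][h] * B[t][h] for h in states}
        Z = sum(unnorm.values())
        posteriors.append({h: v / Z for h, v in unnorm.items()})
    return posteriors


if __name__ == "__main__":
    # Toy "umbrella world" parameters, chosen only to make the sketch runnable.
    states = ["rain", "sun"]
    start = {"rain": 0.5, "sun": 0.5}
    trans = {"rain": {"rain": 0.7, "sun": 0.3}, "sun": {"rain": 0.3, "sun": 0.7}}
    emit = {"rain": {"umbrella": 0.9, "none": 0.1}, "sun": {"umbrella": 0.2, "none": 0.8}}
    print(forward_backward(states, start, trans, emit, ["umbrella", "umbrella", "none"]))
```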

Syllabus

Introduction.
Review: Bayesian network.
Review: probabilistic inference.
Hidden Markov model inference.
Lattice representation.
Summary.
Hidden Markov models.
Review: beam search.
Step 1: propose.
Step 2: weight.
Step 3: resample (these three particle filtering steps are sketched in code after the syllabus).
Application: object tracking.
Particle filtering demo.
Roadmap.
Gibbs sampling.
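The propose / weight / resample steps listed above form the core loop of particle filtering. Below is a minimal Python sketch of that loop on the same kind of toy umbrella-style HMM; the distributions and particle count are illustrative assumptions, not values from the lecture.

```python
import random


def particle_filter(states, start, trans, emit, observations, num_particles=200):
    """Approximate P(H_t | e_1..e_t) with a population of particles."""
    # Initialize particles from the start distribution over the first hidden state.
    particles = random.choices(states, weights=[start[s] for s in states], k=num_particles)
    for t, obs in enumerate(observations):
        if t > 0:
            # Step 1: propose -- advance each particle through the transition model.
            particles = [random.choices(states, weights=[trans[p][s] for s in states])[0]
                         for p in particles]
        # Step 2: weight -- score each particle by how well it explains the observation.
        weights = [emit[p][obs] for p in particles]
        # Step 3: resample -- redraw particles in proportion to their weights.
        particles = random.choices(particles, weights=weights, k=num_particles)
    # Empirical estimate of the filtering distribution after the last observation.
    return {s: particles.count(s) / num_particles for s in states}


if __name__ == "__main__":
    # Toy parameters, invented for illustration only.
    states = ["rain", "sun"]
    start = {"rain": 0.5, "sun": 0.5}
    trans = {"rain": {"rain": 0.7, "sun": 0.3}, "sun": {"rain": 0.3, "sun": 0.7}}
    emit = {"rain": {"umbrella": 0.9, "none": 0.1}, "sun": {"umbrella": 0.2, "none": 0.8}}
    print(particle_filter(states, start, trans, emit, ["umbrella", "umbrella", "none"]))
```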

Taught by

Stanford Online
