
Markov Chain Variance Estimation - A Stochastic Approximation Approach

Centre for Networked Intelligence, IISc via YouTube

Overview

Watch a technical lecture where Dr. Shubhada Agrawal, a postdoctoral researcher from CMU, presents groundbreaking research on estimating asymptotic variance in Markov chains using stochastic approximation. Explore the development of the first recursive estimator that achieves optimal O(1/n) convergence rate while requiring minimal computation and storage. Learn how this innovative approach improves upon existing methods by eliminating the need for historical sample storage and prior run-length knowledge. Discover applications in average reward reinforcement learning, including variance-constrained policy evaluation for safety-critical systems. Delve into extensions covering vector-valued functions, stationary variance estimation, and large state space implementations. Gain insights from Dr. Agrawal's expertise in applied probability and sequential decision-making, developed through her academic journey from IIT Delhi to her current research at CMU.
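To make the problem concrete, here is a minimal Python sketch of online asymptotic-variance estimation for a Markov chain. It uses a classical truncated-autocovariance scheme with a fixed-size window, so it runs in constant memory per step, but it is only an illustration of the general task; it is not the O(1/n)-optimal recursive estimator presented in the lecture, and the two-state chain and all parameter values below are assumptions chosen for the example.

```python
from collections import deque
import random


def simulate_two_state(n, p=0.3, q=0.3, seed=0):
    """Two-state Markov chain on {0, 1}: P(0->1) = p, P(1->0) = q."""
    rng = random.Random(seed)
    x, out = 0, []
    for _ in range(n):
        if x == 0:
            x = 1 if rng.random() < p else 0
        else:
            x = 0 if rng.random() < q else 1
        out.append(x)
    return out


class OnlineAsymptoticVariance:
    """Online estimate of sigma^2 = gamma_0 + 2 * sum_{k>=1} gamma_k,
    the asymptotic variance of the sample mean along the chain.

    Keeps only the last `max_lag` samples (O(max_lag) memory and work
    per observation).  This is a generic truncation scheme, not the
    recursive stochastic-approximation estimator from the talk.
    """

    def __init__(self, max_lag=20):
        self.L = max_lag
        self.n = 0
        self.sum_x = 0.0
        self.cross = [0.0] * (max_lag + 1)   # running sums of x_i * x_{i-k}
        self.window = deque(maxlen=max_lag + 1)

    def update(self, x):
        self.window.appendleft(x)            # window[k] == x_{i-k}
        self.n += 1
        self.sum_x += x
        for k, past in enumerate(self.window):
            self.cross[k] += x * past

    def estimate(self):
        mu = self.sum_x / self.n
        # Plug-in lag-k autocovariances, then the truncated IPS-style sum.
        gammas = [self.cross[k] / self.n - mu * mu for k in range(self.L + 1)]
        return gammas[0] + 2.0 * sum(gammas[1:])


if __name__ == "__main__":
    chain = simulate_two_state(200_000)
    est = OnlineAsymptoticVariance(max_lag=20)
    for x in chain:
        est.update(x)
    # For p = q = 0.3 the true value is pi0*pi1*(2-p-q)/(p+q) = 0.25*1.4/0.6 ~ 0.583.
    print(est.estimate())
```

The window-based estimator above stores `max_lag + 1` past samples; the point of the lecture's recursive approach is precisely to remove even this buffer while still achieving the optimal O(1/n) mean-squared-error rate.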

Syllabus

Time: 5:00 PM

Taught by

Centre for Networked Intelligence, IISc
