
Independent Learning Dynamics for Stochastic Games - Where Game Theory Meets

International Mathematical Union via YouTube

Overview

Explore a 46-minute lecture on independent learning dynamics for stochastic games in multi-agent reinforcement learning. Delve into the challenges of applying classical reinforcement learning to multi-agent scenarios and discover recently proposed independent learning dynamics that guarantee convergence in stochastic games. Examine both zero-sum and single-controller identical-interest settings, while revisiting key concepts from game theory and reinforcement learning. Learn about the mathematical novelties in analyzing these dynamics, including differential inclusion approximation and Lyapunov functions. Gain insights into topics such as Nash equilibrium, fictitious play, and model-free individual Q-learning, all within the context of dynamic multi-agent environments.
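The overview mentions fictitious play converging in zero-sum settings. As a hedged illustration only (not code from the lecture), here is a minimal sketch of fictitious play in a two-player zero-sum matrix game: each player best-responds to the empirical frequency of the opponent's past actions, and in zero-sum games those empirical frequencies converge to a Nash equilibrium. The payoff matrix and step count below are arbitrary choices for the example.

```python
import numpy as np

# Matching pennies: payoffs for the row player; the unique Nash
# equilibrium mixes uniformly, with game value 0.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def fictitious_play(A, steps=20000):
    m, n = A.shape
    row_counts = np.zeros(m)   # empirical action counts, row player
    col_counts = np.zeros(n)   # empirical action counts, column player
    row_counts[0] += 1         # arbitrary initial actions
    col_counts[0] += 1
    for _ in range(steps):
        # Each player's empirical mixed strategy so far.
        col_freq = col_counts / col_counts.sum()
        row_freq = row_counts / row_counts.sum()
        # Best respond to the opponent's empirical strategy:
        # row maximizes, column minimizes.
        row_counts[np.argmax(A @ col_freq)] += 1
        col_counts[np.argmin(row_freq @ A)] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

x, y = fictitious_play(A)
print(x, y)  # both empirical strategies approach uniform mixing
```

The play itself cycles, but the time-averaged (empirical) strategies settle near the uniform Nash strategy, which is the classical convergence guarantee the lecture's dynamics generalize to stochastic games.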

Syllabus

Introduction
Welcome
Reinforcement Learning
Nash Equilibrium
Fictitious Play
Multiagent Learning
Literature Review
Motivation
Outline
Stochastic Game
Optimality
Top Game Theory
Mathematical Dynamics
Learning Rates
Convergence Analysis
Differential Inclusion Approximation
Lyapunov Function
Harriss Lyapunov Function
Zero Sum Case
Zero Potential Case
Convergence
Monotonicity
Model-Free
Individual Q-Learning
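The syllabus closes with model-free individual Q-learning, where each agent learns from its own actions and rewards alone. As a hedged sketch (not the lecture's algorithm), the snippet below shows two agents running independent epsilon-greedy Q-learning in an identical-interest 2x2 matrix game; each agent treats the other as part of the environment, observing only its own action and the shared reward. The payoff matrix and hyperparameters are illustrative assumptions.

```python
import random

# Common payoff shared by both agents; both prefer the joint action (0, 0).
R = [[4.0, 0.0],
     [0.0, 2.0]]

def independent_q_learning(steps=5000, alpha=0.1, eps=0.1, seed=0):
    rng = random.Random(seed)
    q1, q2 = [0.0, 0.0], [0.0, 0.0]  # per-agent action-value estimates
    for _ in range(steps):
        # Epsilon-greedy action selection, independently per agent.
        a1 = rng.randrange(2) if rng.random() < eps else q1.index(max(q1))
        a2 = rng.randrange(2) if rng.random() < eps else q2.index(max(q2))
        r = R[a1][a2]                   # shared (identical-interest) reward
        q1[a1] += alpha * (r - q1[a1])  # each agent updates only its own Q
        q2[a2] += alpha * (r - q2[a2])
    return q1, q2

q1, q2 = independent_q_learning()
print(q1, q2)
```

In this run both agents' greedy actions coordinate on the high-payoff joint action; in general, independent learners can get stuck at the inferior equilibrium, which is one reason convergence guarantees for such dynamics are nontrivial.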

Taught by

International Mathematical Union
