A General Theory for Discrete-Time Mean-Field Games
Institute for Pure & Applied Mathematics (IPAM) via YouTube
Overview
Explore a comprehensive lecture on the general theory of discrete-time mean-field games with discounted infinite-horizon cost. Delve into both perfect local state and decentralized imperfect state information structures, as presented by Tamer Başar from the University of Illinois at Urbana-Champaign. Examine the challenges in obtaining exact Nash equilibrium in dynamic games with decentralized information and finite players. Discover how the mean-field approach offers a solution to these difficulties. Investigate the existence of mean-field equilibrium in the infinite population limit for perfect local state information, and learn how this policy approximates Markov-Nash equilibrium for large player numbers. Explore partially observed mean-field games, including the technique of converting partially observed stochastic control problems to fully observed ones on belief space. Understand the establishment of Nash equilibria under mild technical conditions and how mean-field equilibrium policies form approximate Nash equilibria for games with many players. Gain insights into extensions to risk-sensitive mean-field games and potential applications to games with hierarchies.
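For orientation, a minimal sketch of a discounted discrete-time mean-field game in standard notation; the symbols x_t, u_t, mu_t, beta, c, and the consistency map Lambda below are illustrative assumptions, not necessarily the lecture's own notation:

% Generic player's state x_t, action u_t, state-measure flow \mu = (\mu_t)_{t\ge 0},
% discount factor \beta \in (0,1), one-stage cost c, transition kernel p.
x_{t+1} \sim p(\,\cdot \mid x_t, u_t, \mu_t), \qquad
J_{\mu}(\pi) \;=\; \mathbb{E}^{\pi}\!\left[\sum_{t=0}^{\infty} \beta^{t}\, c(x_t, u_t, \mu_t)\right].
% A mean-field equilibrium is a pair (\pi^*, \mu^*) such that
%   (i)  \pi^* minimizes J_{\mu^*} (best response to the fixed flow \mu^*), and
%   (ii) \mu^* = \Lambda(\pi^*), i.e. \mu^* is the state-measure flow induced by \pi^*.

Under this reading, the equilibrium policy pi^* obtained in the infinite-population limit is the object that, per the overview above, yields an approximate Markov-Nash equilibrium for games with a large but finite number of players.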
Syllabus
Tamer Başar: "A General Theory for Discrete-Time Mean-Field Games"
Taught by
Institute for Pure & Applied Mathematics (IPAM)