Speaker: Shiyun Lin (PKU)
Time: 16:00-17:00, September 1, 2023, GMT+8
Venue: Tencent Meeting ID: 723 1564 5542
Abstract:
A key question for online learning in games is whether the players eventually settle down to a stable profile from which no player has an incentive to deviate, i.e., whether the players' learning process converges to a Nash equilibrium. The answer is positive for certain classes of games, such as two-player zero-sum finite games and monotone, smooth, and potential games, but negative for general games. A natural question, therefore, is to characterize the sets of actions that are stable and attracting under a given learning process.
In this talk, we follow the recent work of Mertikopoulos et al. [2023] to introduce a stochastic approximation framework for analyzing the long-run behavior of learning in games. The framework covers a wide range of learning algorithms, including gradient-based methods, multiplicative weights algorithms for learning in finite games, and optimistic and bandit variants thereof. We will also discuss a range of criteria for identifying classes of Nash equilibria and sets of action profiles that are attracting with high probability.
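To make the setting concrete, here is a minimal sketch (not from the talk) of one algorithm in the family mentioned above: multiplicative weights run by both players of the two-player zero-sum game matching pennies. The payoff matrix, step size, and horizon are illustrative choices; the point is that while the day-to-day strategies may cycle, the time-averaged play approaches the unique Nash equilibrium (1/2, 1/2).

```python
import math

# Matching pennies: row player's payoff matrix (illustrative example).
# The unique Nash equilibrium is the uniform mixed strategy for both players.
A = [[1.0, -1.0],
     [-1.0, 1.0]]

def mw_step(x, payoffs, eta):
    """One multiplicative-weights update: reweight toward higher payoffs."""
    w = [xi * math.exp(eta * p) for xi, p in zip(x, payoffs)]
    s = sum(w)
    return [wi / s for wi in w]

def run(T=10000, eta=0.02):
    x = [0.7, 0.3]  # row player's mixed strategy (non-uniform start)
    y = [0.6, 0.4]  # column player's mixed strategy
    avg_x, avg_y = [0.0, 0.0], [0.0, 0.0]
    for _ in range(T):
        # Expected payoff of each pure action against the opponent's mix;
        # the column player maximizes the negative of the row payoff.
        row_pay = [sum(A[i][j] * y[j] for j in range(2)) for i in range(2)]
        col_pay = [-sum(A[i][j] * x[i] for i in range(2)) for j in range(2)]
        x = mw_step(x, row_pay, eta)
        y = mw_step(y, col_pay, eta)
        avg_x = [a + xi / T for a, xi in zip(avg_x, x)]
        avg_y = [a + yi / T for a, yi in zip(avg_y, y)]
    return avg_x, avg_y

avg_x, avg_y = run()
```

This illustrates the gap between the two convergence notions in the abstract: the averaged strategies converge to equilibrium (a classical no-regret guarantee in zero-sum games), while the actual iterates need not, which is precisely why finer stability and attractivity criteria for the learning process itself are of interest.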
Source: School of Mathematical Sciences