Talk 3: Actor-Critic Reinforcement Learning Algorithms for Mean Field Games in Continuous Time, State and Action Spaces
2024/10/21


Speaker: Zhiping Chen (Xi'an Jiaotong University)


Title: Actor-Critic Reinforcement Learning Algorithms for Mean Field Games in Continuous Time, State and Action Spaces


Abstract: We investigate mean field games in continuous time, state and action spaces with an infinite number of agents, where each agent aims to maximize its expected cumulative reward. Using the technique of randomized policies and focusing on a representative agent, we show that policy evaluation and the policy gradient are equivalent to martingale conditions of an associated process. Combining this with fictitious play, we propose online and offline actor-critic algorithms for solving continuous mean field games that alternately update the value function and the policy under a given population state distribution. Numerical experiments demonstrate that the proposed algorithms converge to the mean field equilibrium quickly and stably.
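To illustrate the alternating value/policy updates and the fictitious-play outer loop described in the abstract, the following is a minimal sketch, not the speaker's algorithm. It assumes a hypothetical linear-quadratic toy problem with dynamics dX_t = u_t dt + sigma dW_t and running reward -(x - m_t)^2 - c u^2 against a frozen population mean path m_t, a Gaussian randomized policy, a Monte Carlo critic with a quadratic baseline, and fictitious-play averaging of the induced mean path; all model choices, parameters, and function names below are illustrative assumptions.

# Minimal sketch (illustrative only, not the speaker's implementation):
# actor-critic best response under a frozen population law, wrapped in a
# fictitious-play outer loop that averages the induced population mean.
import numpy as np

rng = np.random.default_rng(0)

T, dt = 1.0, 0.02
steps = int(T / dt)
sigma, c = 0.3, 0.1            # assumed diffusion coefficient and control cost
n_paths = 512                  # Monte Carlo paths per actor-critic sweep

def rollout(theta, m_path):
    """Simulate the representative agent under the Gaussian (randomized) policy
    u ~ N(theta[0] + theta[1]*(x - m_t), exp(theta[2])^2), with the population
    mean path m_path held fixed."""
    x = rng.normal(0.0, 1.0, n_paths)
    xs = np.empty((steps, n_paths))
    us = np.empty((steps, n_paths))
    rs = np.empty((steps, n_paths))
    for k in range(steps):
        mu = theta[0] + theta[1] * (x - m_path[k])
        u = mu + np.exp(theta[2]) * rng.standard_normal(n_paths)
        xs[k], us[k] = x, u
        rs[k] = -(x - m_path[k]) ** 2 - c * u ** 2
        x = x + u * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return xs, us, rs

def actor_critic_sweep(theta, m_path, lr=0.05):
    """One alternating critic/actor update under the frozen population law."""
    xs, us, rs = rollout(theta, m_path)
    returns = np.cumsum(rs[::-1] * dt, axis=0)[::-1]      # reward-to-go
    # Critic: fit a quadratic baseline V(t, x) ~ w0 + w1*(x - m_t)^2 per step
    adv = np.empty_like(returns)
    for k in range(steps):
        A = np.stack([np.ones(n_paths), (xs[k] - m_path[k]) ** 2], axis=1)
        w, *_ = np.linalg.lstsq(A, returns[k], rcond=None)
        adv[k] = returns[k] - A @ w                        # advantage estimate
    # Actor: score-function policy-gradient step for the Gaussian policy
    grad = np.zeros(3)
    for k in range(steps):
        mu = theta[0] + theta[1] * (xs[k] - m_path[k])
        std = np.exp(theta[2])
        eps = (us[k] - mu) / std
        grad[0] += np.mean(adv[k] * eps / std) * dt
        grad[1] += np.mean(adv[k] * eps / std * (xs[k] - m_path[k])) * dt
        grad[2] += np.mean(adv[k] * (eps ** 2 - 1.0)) * dt
    return theta + lr * grad

# Fictitious-play outer loop: freeze the population mean, (approximately) solve
# the representative agent's best response, then average the induced mean back in.
theta = np.array([0.0, 0.0, np.log(0.5)])
m_path = np.zeros(steps)
for it in range(1, 51):
    for _ in range(10):                    # inner actor-critic sweeps
        theta = actor_critic_sweep(theta, m_path)
    xs, _, _ = rollout(theta, m_path)
    m_path = (1 - 1 / it) * m_path + (1 / it) * xs.mean(axis=1)

print("policy parameters:", theta)

In this sketch the inner loop plays the role of the actor-critic updates under a given population state distribution, while the outer averaging step mimics fictitious play; the martingale characterization mentioned in the abstract is replaced here by plain Monte Carlo reward-to-go estimates for simplicity.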