Browsing by Subject "Mean-field games"
Now showing 1 - 2 of 2
Item (Open Access)
Nash equilibria for exchangeable team-against-team games, their mean-field limit, and the role of common randomness (Society for Industrial and Applied Mathematics, 2024-05-16)
Sanjari, Sina; Saldı, Naci; Yüksel, Serdar

We study stochastic exchangeable games among a finite number of teams, each consisting of a large but finite number of decision makers, as well as their mean-field limit with an infinite number of decision makers in each team. For this class of games, in both static and dynamic settings, we introduce sets of randomized policies under various decentralized information structures, with privately independent or common randomness for the decision makers within each team. (i) For a general class of exchangeable stochastic games with a finite number of decision makers, we first establish the existence of a Nash equilibrium under randomized policies (with common randomness) within each team that are exchangeable (but not necessarily symmetric, i.e., identical) among the decision makers within each team. (ii) As the number of decision makers within each team goes to infinity (that is, for the mean-field limit game among teams), we show that a Nash equilibrium exists under randomized policies within each team that are independently randomized and symmetric among the decision makers within each team (that is, there is no common randomness). (iii) We then establish that a Nash equilibrium for a class of mean-field games among teams under independently randomized symmetric policies constitutes an approximate Nash equilibrium for the corresponding prelimit (exchangeable) game among teams with a finite but large number of decision makers. (iv) We thus establish a rigorous connection between agent-based modeling and team-against-team games, via the representative agents defining the game played in equilibrium, and we furthermore show that common randomness is not necessary for large team-against-team games, unlike the case with small-sized ones.

Item (Open Access)
Q-learning in regularized mean-field games (Birkhaeuser Science, 2022-05-23)
Anahtarci, B.; Kariksiz, C.D.; Saldi, Naci

In this paper, we introduce a regularized mean-field game and study learning of this game under an infinite-horizon discounted reward criterion. Regularization is introduced by adding a strongly concave regularization function to the one-stage reward function of the classical mean-field game model. We establish a value-iteration-based learning algorithm for this regularized mean-field game using fitted Q-learning. The regularization term in general makes the reinforcement learning algorithm more robust to the system components. Moreover, it enables us to establish an error analysis of the learning algorithm without imposing the restrictive convexity assumptions on the system components that are needed in the absence of a regularization term.
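To make the flavor of the second abstract concrete, the following is a minimal toy sketch (not the authors' algorithm) of value iteration for a mean-field game whose one-stage reward carries a strongly concave entropy regularizer; with the mean-field term held fixed, the regularizer turns the Bellman max into a log-sum-exp and the equilibrium policy into a softmax. All model components here (state/action sizes, transition kernel `P`, reward `r`, temperature `tau`) are illustrative assumptions.

```python
import numpy as np

# Toy regularized (entropy-smoothed) Q-iteration with the mean-field
# distribution mu held fixed. Everything below is a hypothetical model,
# not taken from the paper.

rng = np.random.default_rng(0)
S, A = 4, 3          # small state and action spaces
beta = 0.9           # discount factor
tau = 0.5            # strength of the entropy regularizer

# Random transition kernel: P[s, a] is a distribution over next states.
P = rng.random((S, A, S))
P /= P.sum(axis=2, keepdims=True)

# One-stage reward with a (hypothetical) coupling to the fixed mean field mu.
mu = np.full(S, 1.0 / S)
r = rng.random((S, A)) + 0.1 * (P @ mu)

def soft_bellman(Q):
    """Entropy-regularized Bellman operator: the hard max over actions is
    replaced by tau * log-sum-exp, which the strongly concave entropy
    regularizer induces in closed form."""
    v = tau * np.log(np.exp(Q / tau).sum(axis=1))   # soft state values
    return r + beta * (P @ v)

# Value iteration: the operator is a beta-contraction, so iterates converge.
Q = np.zeros((S, A))
for _ in range(500):
    Q = soft_bellman(Q)

# The regularized best response is the softmax of Q, row-normalized per state.
policy = np.exp(Q / tau)
policy /= policy.sum(axis=1, keepdims=True)
```

The smoothing is what buys robustness: small errors in `Q` perturb the softmax policy continuously, whereas an unregularized argmax can flip discontinuously, which is one intuition for why the regularized setting admits an error analysis without convexity assumptions.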