Browsing by Subject "fictitious play"
Now showing 1 - 2 of 2
Item (Open Access): Do players learn how to learn? : evidence from constant sum games with varying number of actions (2009)
Saraçgil, İhsan Erman

This thesis investigates the learning behaviour of individuals in strategic environments of varying complexity. A new experiment is conducted in which subjects play ascending or descending series of constant sum games, and the experimental data, covering both stated beliefs and actual plays, are used to estimate which learning model best explains the subjects' behaviour within and across these games. Taking into account learning rules that model the opponent as a learning agent, as well as the heterogeneity of the population, the estimation results support the conclusion that people switch learning rules across games and use different models in different games. This game-dependency is confirmed by the action-based, the belief-based, and the joint estimations. Although their likelihoods vary from game to game, best response to uniform beliefs and reinforcement learning are the most commonly used learning rules in the four games considered in the experiment, while fictitious play and iterations on it are rare, observed only in the estimation by stated beliefs. Despite the change across games, there is no significant link between the complexity of a game and the cognitive hierarchy of learning models. Belief statements and best response behaviour also differ across games: we observe people making smoother guesses in large action games and more dispersed belief statements in small action games. Inconsistency between actions and stated beliefs is stronger in large action games. The evidence strongly supports that learning and belief formation are both game-dependent.

Item (Open Access): Fictitious play in zero-sum stochastic games (Society for Industrial and Applied Mathematics, 2022)
Sayin, Muhammed O.; Parise, Francesca; Ozdaglar, Asuman

We present a novel variant of fictitious play dynamics combining classical fictitious play with Q-learning for stochastic games, and analyze its convergence properties in two-player zero-sum stochastic games. Our dynamics involves players forming beliefs on the opponent's strategy and on their own continuation payoff (Q-function), and playing a greedy best response using the estimated continuation payoffs. Players update their beliefs from observations of opponent actions. A key property of the learning dynamics is that the beliefs on Q-functions are updated at a slower timescale than the beliefs on strategies. We show that in both the model-based and the model-free case (without knowledge of player payoff functions and state transition probabilities), the beliefs on strategies converge to a stationary mixed Nash equilibrium of the zero-sum stochastic game.
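The learning rule in the subject heading, which the first abstract weighs among its candidate models, is simple enough to sketch: each player best-responds to the empirical frequency of the opponent's past actions. The following is a minimal illustration, assuming a two-player zero-sum game given by a payoff matrix; the matching pennies example and all names are illustrative choices, not taken from either work.

```python
import numpy as np

def fictitious_play(payoff, rounds=100_000):
    """Classical fictitious play in a two-player zero-sum game.

    payoff[i, j] is the row player's payoff when row plays i and column
    plays j; the column player receives -payoff[i, j].
    """
    n_row, n_col = payoff.shape
    row_counts = np.ones(n_row)  # column player's counts of row's past actions
    col_counts = np.ones(n_col)  # row player's counts of column's past actions

    for _ in range(rounds):
        # Best response to the opponent's empirical action frequencies.
        row_action = int(np.argmax(payoff @ (col_counts / col_counts.sum())))
        col_action = int(np.argmin((row_counts / row_counts.sum()) @ payoff))
        row_counts[row_action] += 1
        col_counts[col_action] += 1

    # In zero-sum games the empirical frequencies converge to a Nash
    # equilibrium (Robinson, 1951).
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

# Matching pennies: both players mix (1/2, 1/2) in equilibrium.
print(fictitious_play(np.array([[1.0, -1.0], [-1.0, 1.0]])))
```

Best response to uniform beliefs, another rule the first abstract mentions, corresponds to never updating the counts from their uniform initialization.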
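The second abstract describes a two-timescale variant: per state, each player tracks a belief about the opponent's strategy and an estimate of its own Q-function, best-responds greedily, and updates the Q-estimate with a step size that vanishes faster than the one for strategy beliefs. The sketch below is an assumed rendering of that idea on a made-up two-state game; the toy payoffs, transitions, and step-size schedules are mine, and the paper's precise step-size conditions and its model-based variant are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-state, two-action zero-sum stochastic game (illustrative only).
# payoff[s, a0, a1] is player 0's stage payoff; player 1 receives the negative.
payoff = np.array([[[1.0, -1.0], [-1.0, 1.0]],
                   [[0.5, -0.5], [-0.5, 0.5]]])
# P0[s, a0, a1] = probability that the next state is state 0.
P0 = np.array([[[0.9, 0.1], [0.2, 0.8]],
               [[0.3, 0.7], [0.6, 0.4]]])
gamma, n_states, n_act = 0.8, 2, 2

# Per player i and state s: a belief over the opponent's action in s, and a
# Q-estimate over (own action, opponent action) for continuation payoffs.
pi_hat = [np.full((n_states, n_act), 1.0 / n_act) for _ in range(2)]
Q = [np.zeros((n_states, n_act, n_act)) for _ in range(2)]
visits = np.zeros(n_states)

def greedy(i, s):
    # Greedy best response to the believed opponent strategy, evaluated
    # with the estimated continuation payoffs.
    return int(np.argmax(Q[i][s] @ pi_hat[i][s]))

def value(i, s):
    # Player i's value of state s under its current beliefs.
    return float(np.max(Q[i][s] @ pi_hat[i][s]))

s = 0
for _ in range(500_000):
    visits[s] += 1
    k = visits[s]
    alpha = 1.0 / k                       # fast timescale: strategy beliefs
    beta = 1.0 / (k * (1.0 + np.log(k)))  # slow timescale: Q-estimates

    a = [greedy(0, s), greedy(1, s)]
    r = payoff[s, a[0], a[1]]
    s_next = 0 if rng.random() < P0[s, a[0], a[1]] else 1

    for i in range(2):
        opp = 1 - i
        # Belief update from the observed opponent action.
        onehot = np.eye(n_act)[a[opp]]
        pi_hat[i][s] = (1 - alpha) * pi_hat[i][s] + alpha * onehot
        # Q update toward stage payoff plus discounted believed continuation.
        reward = r if i == 0 else -r
        target = reward + gamma * value(i, s_next)
        Q[i][s][a[i], a[opp]] += beta * (target - Q[i][s][a[i], a[opp]])

    s = s_next

print("state-wise strategy beliefs:", pi_hat)
```

Because beta / alpha tends to zero, the Q-estimates look quasi-static from the perspective of the strategy beliefs, which is the timescale separation the abstract highlights.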