Title: Genetic neural networks to approximate feedback Nash equilibria in dynamic games
Authors: Alemdar, N. M.; Sirakaya, S.
Date issued: 2003
Date available: 2015-07-28
Type: Article
Language: English
ISSN: 0898-1221
DOI: 10.1016/S0898-1221(03)90186-6
Handle: http://hdl.handle.net/11693/13415
Keywords: Feedback Nash equilibrium; Parallel genetic algorithms; Neural networks

Abstract: This paper develops a general-purpose numerical method to compute the feedback Nash equilibria of dynamic games. Players' feedback strategies are first approximated by neural networks, which are then trained online by parallel genetic algorithms that search over all time-invariant equilibrium strategies synchronously. To eliminate the dependence of training on the initial conditions of the game, the players use the same stationary feedback policies (the same networks) to repeatedly play the game from a number of initial states at each generation. The fitness of a given feedback strategy is then computed as the sum of payoffs over all initial states. The evolutionary equilibrium of the game between the genetic algorithms is the feedback Nash equilibrium of the dynamic game. An oligopoly model with investment is solved as a numerical example. (C) 2003 Elsevier Ltd. All rights reserved.
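The training scheme described in the abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration only: the game below is a toy two-player common-stock harvest game, not the paper's oligopoly-with-investment model, and all function names, network sizes, and parameters are assumptions made for the sketch. It shows the two key ideas: one genetic algorithm per player evolving the weights of a small feedback network, and a fitness computed as the discounted payoff summed over several initial states so that the evolved stationary policy does not depend on any single starting point.

```python
import math
import random

# Illustrative sketch (NOT the paper's model): each player's stationary
# feedback policy is a tiny neural network whose weights are evolved by a
# genetic algorithm; fitness sums payoffs over SEVERAL initial states.

H = 4                # hidden units in each policy network (assumed size)
N_W = 3 * H + 1      # input->hidden weights, hidden biases, hidden->out, out bias

def policy(w, x):
    """Feedback rule u = f(x; w): returns a harvest fraction in (0, 1)."""
    hidden = [math.tanh(w[i] * x + w[H + i]) for i in range(H)]
    out = sum(w[2 * H + i] * hidden[i] for i in range(H)) + w[3 * H]
    out = max(min(out, 60.0), -60.0)          # guard the sigmoid
    return 1.0 / (1.0 + math.exp(-out))

def step(x, u1, u2, growth=1.1):
    """One period: clamp harvests, pay log utility, grow the remaining stock."""
    u1 = min(max(u1, 1e-6), x / 2)
    u2 = min(max(u2, 1e-6), x / 2)
    nxt = max((x - u1 - u2) * growth, 1e-6)
    return math.log(u1), math.log(u2), nxt

def fitness(w_self, w_rival, who, inits=(2.0, 5.0, 10.0), T=20, beta=0.95):
    """Discounted payoff of player `who`, summed over all initial stocks."""
    total = 0.0
    for x0 in inits:
        x = x0
        for t in range(T):
            u1 = policy(w_self if who == 1 else w_rival, x) * x / 2
            u2 = policy(w_self if who == 2 else w_rival, x) * x / 2
            p1, p2, x = step(x, u1, u2)
            total += beta ** t * (p1 if who == 1 else p2)
    return total

def evolve(gens=30, pop=20, sigma=0.3, seed=0):
    """Co-evolve the two players' policies with one GA per player."""
    rng = random.Random(seed)
    pops = [[[rng.gauss(0, 1) for _ in range(N_W)] for _ in range(pop)]
            for _ in (1, 2)]
    best = [pops[0][0], pops[1][0]]
    for _ in range(gens):
        for who in (1, 2):          # the paper runs the players' GAs in parallel
            rival = best[2 - who]   # opponent's current best policy
            ranked = sorted(pops[who - 1],
                            key=lambda w: fitness(w, rival, who),
                            reverse=True)
            best[who - 1] = ranked[0]
            elite = ranked[:pop // 2]
            kids = [[wi + rng.gauss(0, sigma) for wi in rng.choice(elite)]
                    for _ in range(pop - len(elite))]
            pops[who - 1] = elite + kids
    return best   # approximate mutual best responses
```

Calling `evolve()` returns one weight vector per player; at a rest point of the co-evolution each policy is an approximate best response to the other, which is the sense in which the evolutionary equilibrium of the GAs approximates the feedback Nash equilibrium of the dynamic game.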