
dc.contributor.author: Alemdar, N. M.
dc.contributor.author: Sirakaya, S.
dc.date.accessioned: 2015-07-28T12:06:15Z
dc.date.available: 2015-07-28T12:06:15Z
dc.date.issued: 2003
dc.identifier.issn: 0898-1221
dc.identifier.uri: http://hdl.handle.net/11693/13415
dc.description.abstract: This paper develops a general purpose numerical method to compute the feedback Nash equilibria in dynamic games. Players' feedback strategies are first approximated by neural networks which are then trained online by parallel genetic algorithms to search over all time-invariant equilibrium strategies synchronously. To eliminate the dependence of training on the initial conditions of the game, the players use the same stationary feedback policies (the same networks), to repeatedly play the game from a number of initial states at any generation. The fitness of a given feedback strategy is then computed as the sum of payoffs over all initial states. The evolutionary equilibrium of the game between the genetic algorithms is the feedback Nash equilibrium of the dynamic game. An oligopoly model with investment is approximated as a numerical example. (C) 2003 Elsevier Ltd. All rights reserved.
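The scheme the abstract describes can be sketched in code: each player's time-invariant feedback rule is a small neural network, two genetic algorithms (one per player) evolve the network weights synchronously, and a chromosome's fitness is its discounted payoff summed over several initial states against the rival's current best network. Everything below other than that outline is an assumption for illustration: the duopoly payoff function, the inverse demand `2.0 - 0.5 * (x1 + x2)`, the quadratic investment cost, and the truncation-selection/Gaussian-mutation operators standing in for the paper's genetic operators.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny one-hidden-layer network mapping the 2-d state to a scalar action.
N_H = 4
N_PARAMS = 2 * N_H + N_H + N_H + 1  # W1, b1, W2, b2

def policy(params, state):
    W1 = params[:2 * N_H].reshape(N_H, 2)
    b1 = params[2 * N_H:3 * N_H]
    W2 = params[3 * N_H:4 * N_H]
    b2 = params[4 * N_H]
    h = np.tanh(W1 @ state + b1)
    return float(np.tanh(W2 @ h + b2))  # bounded investment rate in (-1, 1)

def payoffs(p1, p2, x0, T=20, beta=0.95, delta=0.1):
    """Discounted payoffs when both players use time-invariant feedback rules."""
    x = np.asarray(x0, dtype=float).copy()
    u1 = u2 = 0.0
    for t in range(T):
        a = np.array([policy(p1, x), policy(p2, x)])
        price = max(0.0, 2.0 - 0.5 * x.sum())         # assumed inverse demand
        u1 += beta ** t * (price * x[0] - a[0] ** 2)  # revenue minus investment cost
        u2 += beta ** t * (price * x[1] - a[1] ** 2)
        x = np.clip((1 - delta) * x + a, 0.0, 5.0)    # capital accumulation
    return u1, u2

# The same networks replay the game from every initial state, which is how
# the paper removes the dependence of training on any single starting point.
INITIAL_STATES = [(0.5, 0.5), (1.0, 0.2), (0.2, 1.0), (2.0, 2.0)]

def fitness(pop, rival_best, player):
    """Fitness of a chromosome = payoff summed over all initial states,
    played against the rival population's current best network."""
    f = np.empty(len(pop))
    for k, params in enumerate(pop):
        total = 0.0
        for x0 in INITIAL_STATES:
            u1, u2 = (payoffs(params, rival_best, x0) if player == 0
                      else payoffs(rival_best, params, x0))
            total += u1 if player == 0 else u2
        f[k] = total
    return f

def evolve(pop, f, sigma=0.1):
    """Truncation selection plus Gaussian mutation (a simplified stand-in for
    the paper's genetic operators); returns the population sorted best-first."""
    order = np.argsort(f)[::-1]
    elite = pop[order][: len(pop) // 2]
    children = elite + sigma * rng.standard_normal(elite.shape)
    return np.vstack([elite, children])

POP, GENS = 20, 30
pops = [0.5 * rng.standard_normal((POP, N_PARAMS)) for _ in range(2)]
for gen in range(GENS):  # the two genetic algorithms run synchronously
    bests = [pops[0][0], pops[1][0]]
    for i in (0, 1):
        f = fitness(pops[i], bests[1 - i], i)
        pops[i] = evolve(pops[i], f)

u1, u2 = payoffs(pops[0][0], pops[1][0], INITIAL_STATES[0])
print(f"approximate equilibrium payoffs from (0.5, 0.5): {u1:.3f}, {u2:.3f}")
```

At the rest point of this co-evolution neither population can improve against the other's best rule, which mirrors the paper's claim that the evolutionary equilibrium of the game between the genetic algorithms is the feedback Nash equilibrium of the dynamic game.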
dc.language.iso: English
dc.source.title: Computers and Mathematics with Applications
dc.relation.isversionof: http://dx.doi.org/10.1016/S0898-1221(03)90186-6
dc.subject: Feedback Nash equilibrium
dc.subject: Parallel genetic algorithms
dc.subject: Neural networks
dc.title: Genetic neural networks to approximate feedback Nash equilibria in dynamic games
dc.type: Article
dc.department: Department of Economics
dc.citation.spage: 1493
dc.citation.epage: 1509
dc.citation.volumeNumber: 46
dc.citation.issueNumber: 11
dc.identifier.doi: 10.1016/S0898-1221(03)90186-6
dc.publisher: Pergamon Press

