Browsing by Subject "Parallel genetic algorithms"
Now showing 1 - 4 of 4
Item (Open Access): Genetic neural networks to approximate feedback Nash equilibria in dynamic games (Pergamon Press, 2003). Alemdar, N. M.; Sirakaya, S.
This paper develops a general-purpose numerical method to compute feedback Nash equilibria in dynamic games. Players' feedback strategies are first approximated by neural networks, which are then trained online by parallel genetic algorithms that search over all time-invariant equilibrium strategies synchronously. To eliminate the dependence of training on the initial conditions of the game, the players use the same stationary feedback policies (the same networks) to repeatedly play the game from a number of initial states at each generation. The fitness of a given feedback strategy is then computed as the sum of payoffs over all initial states. The evolutionary equilibrium of the game between the genetic algorithms is the feedback Nash equilibrium of the dynamic game. An oligopoly model with investment is approximated as a numerical example. © 2003 Elsevier Ltd. All rights reserved.

Item (Open Access): Learning the optimum as a Nash equilibrium (Elsevier BV, 2000). Özyıldırım, S.; Alemdar, N. M.
This paper shows the computational benefits of a game-theoretic approach to the optimization of high-dimensional control problems. A dynamic noncooperative game framework is adopted to partition the control space and to search for the optimum as the equilibrium of a k-person dynamic game played by k parallel genetic algorithms. When there are multiple inputs, control authority over each set of control variables is delegated exclusively to one player, so that k artificially intelligent players explore and communicate to learn the global optimum as the Nash equilibrium. In the case of a single input, each player's decision authority becomes active on an exclusive set of dates, so that the k GAs construct the optimal control trajectory as the equilibrium of evolving best-to-date responses. Sample problems are provided to demonstrate the gains in computational speed and accuracy. © 2000 Elsevier Science B.V.

Item (Open Access): Multi-population parallel genetic algorithm using a new genetic representation for the Euclidean traveling salesman problem (İstanbul Technical University, 2005). Kapanoğlu, M.; Koç, İ. O.; Kara, İ.; Aktürk, Mehmet Selim
This paper introduces a multi-population parallel genetic algorithm (M-PPGA) that uses a new genetic representation, the kth-nearest-neighbor representation, for Euclidean traveling salesman problems. The proposed M-PPGA runs M greedy genetic algorithms on M separate populations, each with two new operators: intersection repairing and cheapest insert. The M-PPGA finds optimal or near-optimal solutions by using a novel communication operator among individually converged populations. The algorithm generates high-quality building blocks within each population and then combines these blocks into optimal or near-optimal solutions by means of the communication operator. On the test problems considered, including Turkey81, the proposed M-PPGA outperforms the competitive GAs known to us in both running time and solution quality.

Item (Open Access): On-line computation of Stackelberg equilibria with synchronous parallel genetic algorithms (Elsevier BV, 2003). Alemdar, N. M.; Sirakaya, S.
This paper develops a method to compute Stackelberg equilibria in sequential games. We construct a normal-form game that is interactively played by an artificially intelligent leader, GAL, and a follower, GAF. The leader is a genetic algorithm breeding a population of potential actions to better anticipate the follower's reaction. The follower is also a genetic algorithm, training a suitable neural network on-line to evolve a population of rules that respond to any move in the leader's action space. When the GAs repeatedly play this game, updating each other synchronously, the populations converge to the Stackelberg equilibrium of the sequential game. We provide numerical examples attesting to the efficiency of the algorithm. © 2002 Elsevier Science B.V. All rights reserved.
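The items listed above share one device: each player in a game is assigned its own genetic algorithm, the GAs are updated synchronously against each other's best-to-date strategies, and the coevolutionary fixed point is the game's Nash equilibrium. A minimal sketch of that idea (not any of the papers' actual implementations) is two coevolving GAs searching for the Nash equilibrium of a textbook Cournot duopoly; the demand and cost parameters `A`, `B`, `C` below are assumed for illustration, and the analytic equilibrium quantity (A − C)/(3B) serves as a benchmark.

```python
import random

# Assumed inverse demand p = A - B*(q1 + q2) with constant marginal
# cost C; the analytic Cournot-Nash quantity is (A - C) / (3B).
A, B, C = 120.0, 1.0, 30.0

def profit(q_own, q_other):
    # Payoff of producing q_own while the rival produces q_other.
    price = max(A - B * (q_own + q_other), 0.0)
    return (price - C) * q_own

def evolve(pop, rival_best, mut=1.0):
    # Rank candidates against the rival's current best quantity, keep
    # the top half, and refill with mutated copies of the elite.
    pop.sort(key=lambda q: profit(q, rival_best), reverse=True)
    elite = pop[: len(pop) // 2]
    children = [max(0.0, q + random.gauss(0.0, mut)) for q in elite]
    return elite + children

random.seed(0)
pop1 = [random.uniform(0.0, 60.0) for _ in range(40)]
pop2 = [random.uniform(0.0, 60.0) for _ in range(40)]

for _ in range(300):
    # Synchronous update: both GAs respond to the other's best-to-date
    # quantity from the previous generation.
    b1, b2 = pop1[0], pop2[0]
    pop1 = evolve(pop1, b2)
    pop2 = evolve(pop2, b1)

nash = (A - C) / (3 * B)  # analytic benchmark
print(pop1[0], pop2[0], nash)
```

Because each GA's elite is re-ranked against the rival's latest best reply, the loop is a noisy best-response dynamic, and both populations drift toward the analytic equilibrium quantity of 30. The papers above replace this scalar action with neural-network feedback rules, time-partitioned control trajectories, or leader–follower move orders, but the synchronous coevolutionary loop is the same.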