Browsing by Subject "Stochastic growth model"
Now showing 1 - 2 of 2
Item Open Access
Approximating the stochastic growth model with neural networks trained by genetic algorithms (2006) Kıykaç, Cihan
In this thesis study, we present a direct numerical solution methodology for the one-sector nonlinear stochastic growth model. Rather than parameterizing or otherwise working with the Euler equation, as other methods do, our method directly parameterizes the policy function with a neural network trained by a genetic algorithm. Since genetic algorithms are derivative-free and the policy function is parameterized directly, there is no need to take derivatives. While other methods are limited by the existence of the required derivatives in higher-dimensional state spaces, our method remains applicable. Because genetic algorithms are global search algorithms, the method's results are robust regardless of the search space. In addition to the stochastic growth model, to assess the method's performance under more realistic conditions, we tested it on a model with capital adjustment costs. Under all parameter configurations, the method performs quite well.

Item Open Access
Feedback approximation of the stochastic growth model by genetic neural networks (Springer New York LLC, 2006) Sirakaya, S.; Turnovsky, S.; Alemdar, M. N.
A direct numerical optimization method is developed to approximate the one-sector stochastic growth model. The feedback investment policy is parameterized as a neural network and trained by a genetic algorithm to maximize the utility functional over the space of time-invariant investment policies. To eliminate the dependence of training on initial conditions, at each generation the same stationary investment policy (the same network) is used to repeatedly solve the problem from differing initial conditions. The fitness of a given policy rule is then computed as the sum of payoffs over all initial conditions. The algorithm performs quite well under a wide range of parameters. Given the general-purpose nature of the method, the flexibility of the neural network parameterization, and the global nature of the genetic algorithm search, it can easily be extended to tackle problems with higher-dimensional nonlinearities, state spaces, and/or discontinuities. © Springer Science+Business Media, Inc. 2006.
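Both abstracts describe the same basic idea: a feedback investment policy for the one-sector stochastic growth model is parameterized as a small neural network, and a genetic algorithm evolves the network weights to maximize discounted utility, with fitness accumulated from several differing initial capital stocks. The sketch below illustrates that idea only; it is not the authors' code, and the model parameters, network size, shock process, and genetic-algorithm settings are all illustrative assumptions.

```python
# Minimal sketch: GA-trained neural-network investment policy for a
# one-sector stochastic growth model. All numbers are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# --- model parameters (assumed values) ---
ALPHA, DELTA, BETA = 0.36, 0.08, 0.96      # capital share, depreciation, discount factor
RHO, SIGMA = 0.9, 0.02                     # AR(1) log-productivity shock
T, N_SHOCK_PATHS = 80, 4                   # horizon and simulated shock paths
K0_GRID = [0.5, 1.0, 2.0]                  # differing initial capital stocks

# --- neural-network policy: (k, z) -> investment share in (0, 1) ---
N_HIDDEN = 6
N_WEIGHTS = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1   # W1, b1, W2, b2

def policy(params, k, z):
    W1 = params[:2 * N_HIDDEN].reshape(N_HIDDEN, 2)
    b1 = params[2 * N_HIDDEN:3 * N_HIDDEN]
    W2 = params[3 * N_HIDDEN:4 * N_HIDDEN]
    b2 = params[-1]
    h = np.tanh(W1 @ np.array([k, z]) + b1)
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))      # logistic output keeps the share in (0, 1)

# --- fitness: discounted log-utility, summed over initial conditions and shock paths ---
def fitness(params, shock_paths):
    total = 0.0
    for k0 in K0_GRID:
        for eps in shock_paths:
            k, log_z, value = k0, 0.0, 0.0
            for t in range(T):
                z = np.exp(log_z)
                y = z * k ** ALPHA
                s = policy(params, k, z)
                c = max((1.0 - s) * y, 1e-8)          # keep consumption strictly positive
                value += BETA ** t * np.log(c)
                k = (1.0 - DELTA) * k + s * y
                log_z = RHO * log_z + eps[t]
            total += value
    return total

# --- simple genetic algorithm: truncation selection, uniform crossover, Gaussian mutation ---
POP, GENS, ELITE, MUT_STD = 30, 40, 8, 0.1
shock_paths = SIGMA * rng.standard_normal((N_SHOCK_PATHS, T))
population = 0.5 * rng.standard_normal((POP, N_WEIGHTS))

for gen in range(GENS):
    scores = np.array([fitness(ind, shock_paths) for ind in population])
    order = np.argsort(scores)[::-1]
    parents = population[order[:ELITE]]
    children = []
    while len(children) < POP - ELITE:
        pa, pb = parents[rng.integers(ELITE, size=2)]
        mask = rng.random(N_WEIGHTS) < 0.5
        child = np.where(mask, pa, pb) + MUT_STD * rng.standard_normal(N_WEIGHTS)
        children.append(child)
    population = np.vstack([parents, np.array(children)])
    if gen % 10 == 0:
        print(f"generation {gen:3d}  best fitness {scores[order[0]]:.3f}")

best = population[0]   # best individual from the last evaluated generation
print("investment share at k=1, z=1:", policy(best, 1.0, 1.0))
```

Evaluating fitness from several initial capital stocks, as in the second abstract, is what removes the dependence of the trained policy on any single starting point: the same stationary network must perform well from every k0 in the grid.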