Author: Avcı, Harun
Dates: 2018-05-18; 2018-05; 2018-05-17
URI: http://hdl.handle.net/11693/46944

Note: Cataloged from PDF version of article.
Thesis (M.S.): İhsan Doğramacı Bilkent University, Department of Industrial Engineering, 2018.
Includes bibliographical references (leaves 44-48).

Abstract: We consider a discrete-time, infinite-horizon inventory system with full backlogging, a deterministic replenishment lead time, and Markov-modulated demand. The actual demand state can only be imperfectly estimated from past demand data. We model the inventory replenishment problem as a Markov decision process with an uncountable state space consisting of the inventory position and the most recent belief about the actual demand state. When the demand state evolves according to an ergodic Markov chain, we use the vanishing discount method together with a coupling argument to prove the existence of an optimal average cost that is independent of the initial system state. With this result, we establish the average-cost optimality of a belief-dependent base-stock policy. We then discretize the belief space into a regular grid; the average cost under this discretization converges to the optimal average cost as the number of grid points grows large. Finally, we conduct numerical experiments to evaluate a myopic belief-dependent base-stock policy as a heuristic. On a test bed of 108 instances, the average cost under the myopic policy deviates by no more than a few percent from the best lower bound on the optimal average cost obtained from our discretization.

Physical description: viii, 48 leaves : charts ; 30 cm
Language: English
Rights: info:eu-repo/semantics/openAccess
Keywords: Inventory Control; Markov-Modulated Demand; Partial Observations; Long-Run Average Cost; Base-Stock Policy
Title: Structural results for average-cost inventory models with partially observed Markov-modulated demand
Title (Turkish): Saklı Markov süreciyle değişen talep dağılımlı ortalama maliyet envanter modellerinde yapısal sonuçlar
Type: Thesis
Call number: B158355
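
Illustrative sketch (not part of the thesis record): the Python snippet below shows, under purely hypothetical parameters, the two ingredients the abstract describes, namely a one-step Bayesian update of the belief about the hidden demand state and a belief-dependent, newsvendor-style myopic base-stock level. The transition matrix P, the per-state demand distributions, and the cost parameters h and b are assumptions made for the example; the lead-time demand convolution and the grid discretization of the belief space used in the thesis are omitted for brevity.

import math
import numpy as np

# --- Illustrative parameters (hypothetical; not taken from the thesis) ---
P = np.array([[0.9, 0.1],           # transition matrix of the hidden demand-state chain
              [0.2, 0.8]])
support = np.arange(0, 21)           # truncated demand support
rates = [3.0, 8.0]                   # per-state Poisson-like demand rates (assumed)
pmf = np.array([[math.exp(-r) * r**k / math.factorial(k) for k in support] for r in rates])
pmf /= pmf.sum(axis=1, keepdims=True)    # renormalize after truncation

h, b = 1.0, 9.0                      # holding and backlog cost per unit per period (assumed)


def update_belief(belief, demand):
    """One-step Bayesian belief update after observing a period's demand.

    Convention: demand is generated by the current hidden state, which then
    transitions; other timing conventions lead to an analogous update.
    """
    filtered = belief * pmf[:, int(demand)]   # condition on the observed demand
    predicted = filtered @ P                  # push the belief one period forward
    return predicted / predicted.sum()


def myopic_base_stock(belief):
    """Newsvendor-style myopic base-stock level under the current belief.

    Single-period simplification: the critical fractile b / (b + h) is applied
    to the belief-mixed demand distribution.
    """
    mixed = belief @ pmf                      # mixture of the per-state demand pmfs
    cdf = np.cumsum(mixed)
    return int(support[np.searchsorted(cdf, b / (b + h))])


belief = np.array([0.5, 0.5])                 # uniform initial belief over demand states
for d in [2, 3, 9, 10, 11]:                   # hypothetical demand observations
    belief = update_belief(belief, d)
    print(f"demand={d:2d}  belief={np.round(belief, 3)}  base-stock={myopic_base_stock(belief)}")

In this toy example the printed base-stock level rises as the observed demands shift the belief toward the high-demand state, which is the qualitative behavior a belief-dependent base-stock policy is meant to capture.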