Structural results for average-cost inventory models with partially observed Markov-modulated demand
Please cite this item using this persistent URL: http://hdl.handle.net/11693/46944
We consider a discrete-time infinite-horizon inventory system with full backlogging, deterministic replenishment lead time, and Markov-modulated demand. The actual state of demand can only be imperfectly estimated based on past demand data. We model the inventory replenishment problem as a Markov decision process with an uncountable state space consisting of both the inventory position and the most recent belief about the actual state of demand. When the demand state evolves according to an ergodic Markov chain, using the vanishing discount method along with a coupling argument, we prove the existence of an optimal average cost that is independent of the initial system state. With this result, we establish the average-cost optimality of a belief-dependent base-stock policy. We then discretize the belief space into a regular grid. The average cost under our discretization converges to the optimal average cost as the number of grid points grows large. Finally, we conduct numerical experiments to evaluate the use of a myopic belief-dependent base-stock policy as a heuristic. On a test bed of 108 instances, the average cost under the myopic policy deviates by no more than a few percent from the best lower bound on the optimal average cost obtained from our discretization.
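To make the belief dynamics concrete, the following is a minimal illustrative sketch (not the paper's algorithm or its parameters) of the two ingredients the abstract names: a Bayesian update of the belief about the hidden demand state after observing one period's demand, followed by a one-step propagation through the modulating Markov chain, and an order quantity driven by a belief-dependent base-stock level. The transition matrix `P`, the per-state demand distributions `demand_pmf`, and the per-state base-stock levels `S` are all hypothetical placeholders, and the choice of a belief-weighted target level is one simple way to make the base-stock level belief-dependent.

```python
import numpy as np

# Hypothetical two-state modulating chain and demand distributions
# (placeholders; the paper's instances are not reproduced here).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])              # hidden demand-state transitions
demand_pmf = np.array([[0.5, 0.3, 0.2],   # P(demand = 0,1,2 | state 0)
                       [0.1, 0.3, 0.6]])  # P(demand = 0,1,2 | state 1)

def update_belief(belief, observed_demand):
    """Bayes update after observing one period's demand, then a one-step
    propagation through the Markov chain to get next period's belief."""
    likelihood = demand_pmf[:, observed_demand]
    posterior = belief * likelihood
    posterior /= posterior.sum()
    return posterior @ P

def base_stock_order(belief, inventory_position, S=np.array([3.0, 6.0])):
    """Order up to a belief-dependent base-stock level; here sketched as
    the belief-weighted average of hypothetical per-state levels S."""
    target = float(belief @ S)
    return max(0.0, target - inventory_position)

belief = np.array([0.5, 0.5])          # prior over the two demand states
belief = update_belief(belief, observed_demand=2)
order = base_stock_order(belief, inventory_position=2.0)
```

Because observing high demand is more likely under state 1, the updated belief shifts toward that state, which raises the base-stock target and hence the order quantity; this is the sense in which the replenishment decision depends on the belief rather than on the inventory position alone.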