Browsing by Subject "Optimal stopping"
Now showing 1 - 14 of 14
Item Open Access
Asymptotically optimal Bayesian sequential change detection and identification rules (2013) Dayanik, S.; Powell, W. B.; Yamazaki, K.
We study the joint problem of sequential change detection and multiple hypothesis testing. Suppose that the common distribution of a sequence of i.i.d. random variables changes suddenly at some unobservable time to one of finitely many distinct alternatives, and one needs to both detect and identify the change at the earliest possible time. We propose computationally efficient sequential decision rules that are asymptotically either Bayes-optimal or optimal in a Bayesian fixed-error-probability formulation, as the unit detection delay cost or the misdiagnosis and false alarm probabilities go to zero, respectively. Numerical examples are provided to verify the asymptotic optimality and the speed of convergence. © 2012 Springer Science+Business Media, LLC.

Item Open Access
Compound Poisson disorder problem with uniformly distributed disorder time (2019-07) Ürü, Çağın
Suppose that the arrival rate and jump distribution of a compound Poisson process change suddenly at an unknown and unobservable time. The problem of detecting the change (disorder) as soon as it occurs is known as compound Poisson disorder. In practice, an unfavorable regime shift may require immediate action, and a quickest detection rule can allow the decision maker to react to the change and take the necessary countermeasures in a timely manner. Dayanık and Sezer [Compound Poisson disorder problem, Math. Oper. Res., vol. 31, no. 4, pp. 649-672, 2006] completely solve the compound Poisson disorder problem assuming a change-point with an exponential prior distribution. Although the exponential prior is convenient when solving the problem, it has flaws when representing reality due to its memoryless property. Besides, as an informative prior, it fails to represent the case when the decision maker has no prior information on the change-point. Considering these defects, we assume a uniformly distributed change-point instead in our study. Unlike the exponential prior, the uniform prior has memory and can be used when the decision maker does not have a strong belief about the change-point. We reformulate the quickest detection problem as a finite-horizon optimal stopping problem for a piecewise-deterministic and Markovian sufficient statistic. With Monte Carlo simulation and Chebyshev interpolation, we calculate the value function numerically via successive approximations. Studying the sample paths of the sufficient statistic, we describe an explicit quickest detection rule and provide numerical examples for our solution method.
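
The successive approximations described in the item above (backward induction for a finite-horizon optimal stopping problem, with the continuation value estimated by simulation and evaluated by interpolation) can be sketched generically. The snippet below is only an illustration of that scheme: the sufficient statistic's dynamics, the cost functions, the grid, and the use of linear instead of Chebyshev interpolation are assumptions made for the example, not the quantities derived in the thesis.

```python
# Illustrative sketch only: backward induction ("successive approximations") for a
# generic finite-horizon optimal stopping problem of a one-dimensional Markov
# sufficient statistic, with the continuation value estimated by Monte Carlo and
# the previous approximation evaluated by interpolation. The dynamics, costs, grid,
# and the use of linear (rather than Chebyshev) interpolation are assumptions made
# for this example; they are not the statistic or the costs derived in the thesis.
import numpy as np

rng = np.random.default_rng(0)

T = 20                                 # number of decision epochs (finite horizon)
grid = np.linspace(0.0, 5.0, 201)      # grid for the sufficient statistic
n_mc = 2000                            # Monte Carlo samples per grid point

def stop_cost(x):
    return 1.0 / (1.0 + x)             # hypothetical cost of stopping (false alarm)

def run_cost(x):
    return 0.05 * x                    # hypothetical per-period delay cost

def sample_next(x, size):
    return np.maximum(0.0, x + rng.normal(0.1, 0.3, size))   # hypothetical dynamics

V = stop_cost(grid)                    # at the horizon, stopping is forced
for _ in range(T):                     # successive approximations, backward in time
    cont = np.empty_like(grid)
    for i, x in enumerate(grid):
        nxt = sample_next(x, n_mc)
        cont[i] = run_cost(x) + np.interp(nxt, grid, V).mean()
    V = np.minimum(stop_cost(grid), cont)

stop_region = grid[stop_cost(grid) <= cont]          # stop where stopping is cheaper
print("value at x=1:", np.interp(1.0, grid, V))
print("stop for x >=", stop_region.min() if stop_region.size else "never")
```
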
Item Open Access
Compound Poisson disorder problem with uniformly distributed disorder time (Bernoulli Society for Mathematical Statistics and Probability, 2023-08) Uru, C.; Dayanık, Savaş; Sezer, Semih O.
Suppose that the arrival rate and the jump distribution of a compound Poisson process change suddenly at an unknown and unobservable time. We want to detect the change as quickly as possible to take counteractions, e.g., to assure top quality of products in a production system, or to stop credit card fraud in a banking system. If we have no prior information about the future disorder time, then we typically assume that the disorder is equally likely to happen any time – or has uniform distribution – over a long but finite time horizon. We solve this so-called compound Poisson disorder problem for the practically important case of an unknown, unobserved, but uniformly distributed disorder time. The solution hinges on the complete separation of the information flow from the hard time horizon constraint, by describing the former with an autonomous, time-homogeneous, one-dimensional Markov process in terms of which the detection problem translates into a finite-horizon optimal stopping problem. For any given finite horizon, the solution is two-dimensional. For cases where the horizon is large and one is unwilling to set a fixed value for it, we give a one-dimensional approximation. We also discuss an extension where, with positive probability, the disorder may not happen on the given interval at all. In this extended model, if no detection decision is made by the end of the horizon, then a second-level hypothesis testing problem is solved to determine the local parameters of the observed process.

Item Open Access
Compound Poisson disorder problems with nonlinear detection delay penalty cost functions (2010) Dayanik, S.
The quickest detection of the unknown and unobservable disorder time, when the arrival rate and mark distribution of a compound Poisson process suddenly change, is formulated in a Bayesian setting, where the detection delay penalty is a general smooth function of the detection delay time. Under suitable conditions, the problem is shown to be equivalent to the optimal stopping of a finite-dimensional, piecewise-deterministic, strongly Markov sufficient statistic. The solution of the optimal stopping problem is described in detail for the compound Poisson disorder problem with a polynomial detection delay penalty function of arbitrary but fixed degree. The results are illustrated for the case of the quadratic detection delay penalty function. © Taylor & Francis Group, LLC.

Item Open Access
Compound Poisson disorder with general prior and misspecified Wiener disorder problem (2024-07) Şahin, Deniz
For a system modeled with a compound Poisson or a Wiener process, let us assume that the underlying model parameters change at an unknown and unobservable time. For a compound Poisson process, these are the arrival rate and mark distribution, while for a Wiener process, it is the drift parameter. Suppose the decision maker knows the pre- and post-disorder process parameters, as well as the prior density of the disorder time. In this case, finding a stopping rule that optimizes a Bayesian penalty function is called the compound Poisson and Wiener disorder problem, respectively. For the compound Poisson problem, we consider a general prior distribution, where the decision maker has more general knowledge about the disorder time than the exponential and uniform priors addressed in previous studies. For the Wiener problem, we revisit the asset selling problem with an exponential prior, where the decision maker specifies the problem parameters incorrectly. In both cases, the original problems reduce to optimal stopping problems. We use time discretization and successive approximation methods for the first case, and Markov chain approximation and Monte Carlo simulations for the second case. We provide the quickest detection rules and discuss various numerical examples.
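
For readers unfamiliar with the observation model shared by the compound Poisson disorder items above, the following sketch simulates one path in which the arrival rate and jump distribution switch at a disorder time drawn from a uniform prior. All numerical parameters (rates, jump laws, horizon) are hypothetical and chosen only for illustration.

```python
# Illustrative simulation of a compound Poisson "disorder" path: at an unobservable
# time theta, here drawn from a uniform prior on [0, horizon], the arrival rate and
# the jump (mark) distribution change. All parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

horizon = 10.0
lam0, lam1 = 1.0, 3.0                          # pre- and post-disorder arrival rates
jump0 = lambda n: rng.exponential(1.0, n)      # pre-disorder mark distribution
jump1 = lambda n: rng.exponential(2.5, n)      # post-disorder mark distribution

theta = rng.uniform(0.0, horizon)              # disorder time with uniform prior

def poisson_arrivals(rate, t_start, t_end):
    """Arrival times of a homogeneous Poisson process on (t_start, t_end]."""
    t, times = t_start, []
    while True:
        t += rng.exponential(1.0 / rate)
        if t > t_end:
            return np.array(times)
        times.append(t)

pre = poisson_arrivals(lam0, 0.0, theta)
post = poisson_arrivals(lam1, theta, horizon)
arrival_times = np.concatenate([pre, post])
marks = np.concatenate([jump0(len(pre)), jump1(len(post))])

# the decision maker observes only (arrival_times, marks); theta itself is hidden
print(f"disorder at t={theta:.2f}: {len(pre)} pre-change, {len(post)} post-change arrivals")
```
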
Item Open Access
Detection and identification of changes of hidden Markov chains: Asymptotic theory (Springer Science and Business Media B.V., 2021-10-06) Dayanık, Savaş; Yamazaki, Kazutoshi
This paper revisits a unified framework of sequential change-point detection and hypothesis testing modeled using hidden Markov chains and develops its asymptotic theory. Given a sequence of observations whose distributions depend on a hidden Markov chain, the objective is to quickly detect critical events, modeled by the first time the Markov chain leaves a specific set of states, and to accurately identify the class of states that the Markov chain enters. We propose computationally tractable sequential detection and identification strategies and obtain sufficient conditions for asymptotic optimality in two Bayesian formulations. Numerical examples are provided to confirm the asymptotic optimality. © 2021, The Author(s).

Item Open Access
Discrete-time pricing and optimal exercise of American perpetual warrants in the geometric random walk model (2013) Vanderbei, R. J.; Pınar, M. Ç.; Bozkaya, E. B.
An American option (or warrant) is the right, but not the obligation, to purchase or sell an underlying equity at any time up to a predetermined expiration date for a predetermined amount. A perpetual American option differs from a plain American option in that it does not expire. In this study, we solve the optimal stopping problem of a perpetual American option (both call and put) in discrete time using linear programming duality. Under the assumption that the underlying stock price follows a discrete-time and discrete-state Markov process, namely a geometric random walk, we formulate the pricing problem as an infinite-dimensional linear programming (LP) problem using the excessive-majorant property of the value function. This formulation allows us to solve the complementary slackness conditions in closed form, revealing an optimal stopping strategy which highlights the set of stock prices where the option should be exercised. The analysis for the call option reveals that such a critical value exists only in some cases, depending on a combination of state-transition probabilities and the economic discount factor (i.e., the prevailing interest rate), whereas it ceases to be an issue for the put.
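
The LP formulation mentioned in the perpetual-warrant item above can be imitated on a truncated state space. The sketch below prices a perpetual American put on a finite slice of a geometric random walk by searching for the smallest discounted-excessive majorant of the payoff; the truncation (with sticky boundary states), the up/down factor, the move probability, the discount factor, and the strike are assumptions for illustration, whereas the paper works with the infinite-dimensional LP and a closed-form complementary slackness analysis.

```python
# Illustrative finite-state truncation of the LP ("least excessive majorant")
# formulation for a perpetual American put under a geometric random walk. The grid
# size, up/down factor, move probability, discount factor, strike, and the sticky
# boundary treatment are assumptions for this sketch; the paper itself solves the
# infinite-dimensional LP in closed form via complementary slackness.
import numpy as np
from scipy.optimize import linprog

N = 60                                  # states i = -N..N with price s0 * u**i
s0, u = 100.0, 1.05
p, beta = 0.5, 0.98                     # up-move probability, one-period discount
K = 100.0                               # strike of the put

idx = np.arange(-N, N + 1)
prices = s0 * u ** idx.astype(float)
n = len(idx)
payoff = np.maximum(K - prices, 0.0)

P = np.zeros((n, n))                    # truncated random walk, sticky at the ends
for i in range(n):
    P[i, min(i + 1, n - 1)] += p
    P[i, max(i - 1, 0)] += 1.0 - p

# LP: minimize sum(V) subject to V >= payoff and V >= beta * P @ V, i.e. find the
# smallest beta-excessive majorant of the payoff on the truncated state space.
A_ub = np.vstack([-np.eye(n), beta * P - np.eye(n)])
b_ub = np.concatenate([-payoff, np.zeros(n)])
res = linprog(c=np.ones(n), A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n, method="highs")
V = res.x

exercise = prices[np.isclose(V, payoff, atol=1e-5)]   # value equals immediate payoff
print("exercise the put for prices up to about", exercise.max())
```
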
Item Open Access
End-of-life inventory management problem: new results and insights (2020-09) Özyörük, Emin
We consider a manufacturer who controls the inventory of spare parts in the end-of-life phase and takes one of three actions at each period: (1) place an order, (2) use existing inventory, or (3) stop holding inventory and use an outside/alternative source. Two examples of this source are discounts for a new-generation product and delegating operations. The novelty of our study is allowing multiple orders while using strategies pertinent to the end-of-life phase. Demand is described by a non-homogeneous Poisson process, and the decision to stop holding inventory is described by a stopping time. After formulating this problem as an optimal stopping problem with additional decisions and presenting its dynamic programming algorithm, we use martingale theory to facilitate the calculation of the value function. Comparison with benchmark models and sensitivity analysis show the value of our approach and generate several managerial insights. Next, in a more special environment with a single order and a deterministic time to stop holding inventory, we present structural properties and analytical insights. The results include the optimality of the (s, S) policy and the relation between S and the time to stop holding inventory. Finally, we tackle the issue of selecting the intensity function by allowing it to be a stochastic process. The demand process can be constructed by using a Poisson random measure and an intensity process that is measurable with respect to the Skorokhod topology. We show the necessary properties of this process, including its Laplace functional, the strong Markov property, and its compensated random measure. In case the intensity process is unobservable, we construct a non-linear filter process and reduce the problem to one with complete observation.

Item Open Access
Model misspecification in discrete time Bayesian online change detection (Springer, 2023-02-17) Dayanik, Semih; Sezer, Semih O.
We revisit the classical formulation of the discrete time Bayesian online change detection problem in which the common distribution of an observed sequence of random variables changes at an unknown point in time. The objective is to detect the change with a stopping time of the observations and minimize a given Bayes risk. When the change time has a zero-modified geometric prior distribution, the first crossing time of the odds-ratio process over a threshold is known to be an optimal solution. In the current paper, we consider a modeler who misspecifies some of the elements of this formulation. Because of this misspecification, the modeler computes a wrong stopping threshold and follows an incorrect odds-ratio process in implementation. To find her actual Bayes risk, which is different from the value function evaluated with the wrong choices, one needs to compute the expected costs accumulated by the true odds-ratio process until the modeler's odds-ratio process crosses this wrong boundary. In the paper, we carry out these computations in the extended state space of both processes, and we illustrate them on examples. In those examples, we construct tolerance regions for the parameters to be estimated by the modeler. For a given choice by the modeler, the tolerance region is the set of true values for which her relative loss is less than or equal to a predetermined level.
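
The odds-ratio process and threshold rule referenced in the item above can be written down explicitly in the classical, correctly specified setting with a zero-modified geometric prior. In the sketch below, the Gaussian pre- and post-change densities, the prior parameters, and the stopping threshold are illustrative assumptions; the paper instead derives the optimal threshold from the Bayes risk and studies the loss incurred when such ingredients are misspecified.

```python
# Illustrative odds-ratio recursion and threshold rule for discrete-time Bayesian
# online change detection with a zero-modified geometric prior on the change time.
# The Gaussian densities, prior parameters, and the threshold are assumptions for
# this sketch; the paper derives the optimal threshold from the Bayes risk and then
# quantifies the loss when these ingredients are misspecified.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

p = 0.05                    # geometric parameter of the prior on the change time
pi0 = 0.0                   # zero-modification: prior mass on an immediate change
f0 = norm(0.0, 1.0)         # pre-change observation density
f1 = norm(0.7, 1.0)         # post-change observation density
threshold = 10.0            # illustrative stopping threshold for the odds ratio

theta = rng.geometric(p)    # true change time drawn from the (unmodified) prior

phi = pi0 / (1.0 - pi0)     # odds ratio Phi_0 = P(theta <= 0) / P(theta > 0)
n = 0
while phi < threshold and n < 10_000:
    n += 1
    x = (f1 if n >= theta else f0).rvs(random_state=rng)
    lr = f1.pdf(x) / f0.pdf(x)
    phi = lr * (phi + p) / (1.0 - p)    # Phi_n = LR(x_n) * (Phi_{n-1} + p) / (1 - p)

print(f"true change at {theta}, alarm raised at {n} (odds ratio {phi:.1f})")
```
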
Item Open Access
Multisource Bayesian sequential binary hypothesis testing problem (2012) Dayanik, S.; Sezer, S. O.
We consider the problem of testing two simple hypotheses about unknown local characteristics of several independent Brownian motions and compound Poisson processes. All of the processes may be observed simultaneously as long as desired before a final choice between hypotheses is made. The objective is to find a decision rule that identifies the correct hypothesis and strikes the optimal balance between the expected costs of sampling and of choosing the wrong hypothesis. Previous work on Bayesian sequential hypothesis testing in continuous time provides a solution when the characteristics of these processes are tested separately. However, the decision of an observer can improve greatly if multiple information sources are available, both in the form of continuously changing signals (Brownian motions) and marked count data (compound Poisson processes). In this paper, we combine and extend those previous efforts by considering the problem in its multisource setting. We identify a Bayes optimal rule by solving an optimal stopping problem for the likelihood-ratio process. Here, the likelihood-ratio process is a jump-diffusion, and the solution of the optimal stopping problem admits a two-sided stopping region. Therefore, instead of using the variational arguments (and smooth-fit principles) directly, we solve the problem by patching together the solutions of a sequence of optimal stopping problems for the pure diffusion part of the likelihood-ratio process. We also provide a numerical algorithm and illustrate it on several examples.

Item Open Access
Optimal stopping problems for asset management (2012) Dayanık, S.; Egami, M.
An asset manager invests the savings of some investors in a portfolio of defaultable bonds. The manager pays the investors coupons at a constant rate and receives a management fee proportional to the value of the portfolio. He/she also has the right to walk out of the contract at any time with the net terminal value of the portfolio after payment of the investors' initial funds, and is not responsible for any deficit. To control the principal losses, investors may buy from the manager a limited protection which terminates the agreement as soon as the value of the portfolio drops below a predetermined threshold. We assume that the value of the portfolio is a jump-diffusion process and find an optimal termination rule for the manager with and without protection. We also derive the indifference price of a limited protection. We illustrate the solution method on a numerical example. The motivation comes from collateralized debt obligations.

Item Open Access
Pricing perpetual American-type strangle option for Merton's jump diffusion process (2014) Onat, Ayşegül
A stock price X_t evolves according to a jump diffusion process with certain parameters. An asset manager who holds a strangle option on that stock wants to maximize his/her expected payoff over the infinite time horizon. We derive an optimal exercise rule for the asset manager when the underlying stock is dividend paying and when it is non-dividend paying. We conclude that the optimal stopping strategy changes according to the stock's dividend rate. We also illustrate the solution on numerical examples.

Item Open Access
Risk-averse control of undiscounted transient Markov models (Society for Industrial and Applied Mathematics, 2014) Çavuş, Ö.; Ruszczyński, A.
We use Markov risk measures to formulate a risk-averse version of the undiscounted total cost problem for a transient controlled Markov process. Using the new concept of a multikernel, we derive conditions for a system to be risk transient, that is, to have finite risk over an infinite time horizon. We derive risk-averse dynamic programming equations satisfied by the optimal policy, and we describe methods for solving these equations. We illustrate the results on an optimal stopping problem and an organ transplantation problem.
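
The risk-averse dynamic programming equations in the item above can be illustrated on a small transient chain by replacing the one-step expectation with a mean-semideviation risk mapping. The chain, the costs, and the risk-aversion coefficient in the sketch below are hypothetical and only meant to show the shape of the recursion, not the paper's multikernel conditions or its examples.

```python
# Illustrative risk-averse value iteration for an optimal stopping problem on a
# small transient Markov chain: the one-step expectation is replaced by the
# mean-semideviation risk mapping rho(Z) = E[Z] + kappa * E[(Z - E[Z])_+].
# The chain, the costs, and kappa are hypothetical.
import numpy as np

kappa = 0.5                                   # risk-aversion coefficient in [0, 1]
stop_cost = np.array([4.0, 2.0, 1.0, 0.0])    # cost of stopping in each state
run_cost = np.array([0.3, 0.3, 0.3, 0.0])     # cost of continuing one more period
P = np.array([                                # "continue" dynamics; state 3 is an
    [0.5, 0.4, 0.0, 0.1],                     # absorbing, costless state, so the
    [0.2, 0.5, 0.2, 0.1],                     # model is transient
    [0.0, 0.3, 0.5, 0.2],
    [0.0, 0.0, 0.0, 1.0],
])

def one_step_risk(P_row, v):
    mean = P_row @ v
    return mean + kappa * (P_row @ np.maximum(v - mean, 0.0))

V = np.zeros_like(stop_cost)
for _ in range(500):                          # iterate the risk-averse DP operator
    cont = run_cost + np.array([one_step_risk(P[i], V) for i in range(len(V))])
    V_new = np.minimum(stop_cost, cont)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

print("risk-averse values:", np.round(V, 3))
print("stop in states:", np.where(stop_cost <= cont)[0])
```
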
Item Open Access
Wiener disorder problem with observations at fixed discrete time epochs (Institute for Operations Research and the Management Sciences (INFORMS), 2010) Dayanik, S.
Suppose that a Wiener process gains a known drift rate at some unobservable disorder time with a zero-modified exponential distribution. The process is observed only at known, fixed, discrete time epochs, which may not always be equally spaced. The problem is to detect the disorder time as quickly as possible by means of an alarm that depends only on the observations of the Wiener process at those discrete time epochs. We show that Bayes optimal alarm times, which minimize the expected total cost of frequent false alarms and detection delay, always exist. Optimal alarms may in general sound between observation times, when the space-time process of the odds that the disorder happened in the past hits a set with a nontrivial boundary. The optimal stopping boundary is piecewise-continuous and explodes as time approaches each observation time from the left. On each observation interval, if the boundary is not strictly increasing everywhere, then it first decreases and then increases. It is strictly monotone wherever it does not vanish. Its decreasing portion always coincides with some explicit function. We develop numerical algorithms to calculate nearly optimal detection algorithms and their Bayes risks, and we illustrate their use on numerical examples. The solution of the Wiener disorder problem with discretely spaced observation times will help reduce the risks and costs associated with disease outbreaks and production quality control, where observations are often collected and/or inspected periodically.
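
As a rough companion to the last item, the sketch below updates the posterior odds of the disorder only at the fixed observation epochs of a Wiener process, under the simplifying assumption that a disorder occurring inside an observation interval is treated as active over the whole increment; the paper treats the exact mid-interval case, which is what produces the exploding, piecewise-continuous boundaries described above. The drift, volatility, prior rate, epoch spacing, and alarm threshold are all illustrative.

```python
# Illustrative posterior-odds recursion for a Wiener process that gains drift mu at
# an exponentially distributed disorder time and is observed only at fixed epochs.
# Simplifying assumption NOT made in the paper: a disorder occurring inside an
# observation interval is treated as active over the whole increment. All parameter
# values and the alarm threshold are made up.
import numpy as np

rng = np.random.default_rng(3)

mu, sigma = 1.0, 1.0            # post-disorder drift and known volatility
lam = 0.2                       # rate of the exponential prior on the disorder time
epochs = np.cumsum(rng.uniform(0.3, 1.0, 40))   # fixed, unequally spaced epochs
threshold = 15.0                # illustrative alarm threshold on the posterior odds

theta = rng.exponential(1.0 / lam)              # true (hidden) disorder time

phi, t_prev, alarm = 0.0, 0.0, None
for t in epochs:
    dt = t - t_prev
    # simulate the observed increment exactly: drift acts only after theta
    drift_len = max(0.0, t - max(theta, t_prev))
    dx = mu * drift_len + sigma * np.sqrt(dt) * rng.standard_normal()
    # filter update under the whole-increment simplification
    q = 1.0 - np.exp(-lam * dt)                 # P(disorder in (t_prev, t] | none yet)
    lr = np.exp(mu * dx / sigma**2 - mu**2 * dt / (2.0 * sigma**2))
    phi = lr * (phi + q) / (1.0 - q)            # posterior odds of the disorder
    if alarm is None and phi >= threshold:
        alarm = round(t, 2)
    t_prev = t

print(f"true disorder at {theta:.2f}, alarm raised at epoch {alarm}")
```
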