Browsing by Subject "Decision theory"
Now showing 1 - 17 of 17
Item Open Access
Age-based vs. stock level control policies for a perishable inventory system (2001) Tekin, E.; Gürler, Ü.; Berk, E.
In this study, we investigate the impact of a modified lot-size reorder control policy for perishables that bases replenishment decisions on both the inventory level and the remaining lifetimes of the items in stock. We derive expressions for the key operating characteristics of a lost-sales perishable inventory model operating under the proposed age-based policy, and examine the sensitivity of the optimal policy parameters with respect to various system parameters. We compare the performance of the suggested policy to that of the classical (Q, r) policy through a numerical study over a wide range of system parameters. Our findings indicate that the age-based policy is superior to the stock-level policy for slow-moving perishable inventory systems with high service levels.

Item Open Access
The effects of structural characteristics of explanations on use of a DSS (Elsevier, 2006) Gönül, M. S.; Önkal, D.; Lawrence, M.
Research in the field of expert systems has shown that providing supporting explanations may influence the effective use of system-generated advice. However, despite many studies showing the less-than-optimal use made of DSS-prepared advice, almost no research has examined whether the provision of explanations enhances users' ability to accept DSS advice wisely. This study describes an experiment examining the effects of the structural characteristics of explanations provided within a forecasting DSS. In particular, the effects of explanation length (short vs. long) and the conveyed confidence level (weak vs. strong) are examined. Long, strongly confident explanations are found to be more effective in gaining participants' acceptance of interval forecasts.
In addition, explanations with higher information value are more effective than those with low information value, and are thus persuasive tools in the presentation of advice to users.

Item Open Access
Evaluating predictive performance of judgemental extrapolations from simulated currency series (Elsevier, 1999) Pollock, A. C.; Macaulay, A.; Önkal-Atay, D.; Wilkie-Thomson, M. E.
Judgemental forecasting of exchange rates is critical for financial decision-making. Detailed investigation of the potential effects of time-series characteristics on judgemental currency forecasts demands the use of simulated series, where the form of the signal and the probability distribution of the noise are known. The accuracy measures Mean Absolute Error (MAE) and Mean Squared Error (MSE) are frequently applied in assessing judgemental predictive performance on actual exchange-rate data. This paper illustrates that, in applying these measures to simulated series with Normally distributed noise, it may be desirable to use their expected values after standardising the noise variance. A method of calculating the expected values of the MAE and MSE is set out, and an application to financial experts' judgemental currency forecasts is presented.

Item Open Access
A game theoretical modeling and simulation framework for the integration of unmanned aircraft systems into the national airspace (AIAA, 2016) Musavi, Negin; Tekelioğlu, K. B.; Yıldız, Yıldıray; Güneş, Kerem; Onural, Deniz
This paper presents a game theoretical modeling and simulation framework for the integration of Unmanned Aircraft Systems (UAS) into the National Airspace System (NAS). The research problem addressed is predicting the outcome of complex scenarios in which UAS and manned air vehicles coexist. The fundamental gap in the literature on modeling UAS integration into the NAS is that existing models of the interaction between manned and unmanned vehicles are insufficient.
These models are insufficient because (a) they assume that human behavior is known a priori, and (b) they disregard the human reaction and decision-making process. The contribution of this paper is a realistic modeling and simulation framework that fills this gap. The foundations of the proposed modeling method are game theory, which analyzes strategic decision making between intelligent agents; the bounded rationality concept, which reflects the fact that humans cannot always make perfect decisions; and reinforcement learning, which has been shown to be effective in modeling human behavior in the psychology literature. These concepts are used to develop a simulator that can produce the outcomes of scenarios consisting of UAS, manned vehicles, automation, and their interactions. An analysis of UAS integration is carried out with a scenario designed specifically for this paper, in which a UAS equipped with a sense-and-avoid algorithm moves along a predefined trajectory in a crowded airspace. The effect of various system parameters on the safety and performance of the overall system is then investigated.

Item Open Access
Guessing with lies (IEEE, 2002-06-07) Arıkan, Erdal; Boztaş, S.
A practical algorithm is obtained for directly generating an optimal guessing sequence for guessing under lies. An optimal guessing strategy is defined as one that minimizes the average number of guesses needed to determine the correct value of a random variable. Information-theoretic bounds on the average number of guesses for optimal strategies are also derived.

Item Open Access
Incorporating just-in-time into a decision support system environment (Elsevier BV, 1991) Oğuz, Ceyda; Dinçer, Cemal
In this paper, a Decision Support System is proposed for a Just-In-Time production system. The Decision Support System includes three components: a database, a model base, and an interface.
The database contains the predefined parameters together with the data generated for the Just-In-Time production system under consideration. In the model base, both deterministic and stochastic aspects of the system are considered: the deterministic system is examined by constructing a linear programming model, whereas simulation is used as the tool for the stochastic system. Furthermore, a sensitivity analysis of the Just-In-Time production system is performed within the Decision Support System environment for unit-load-size changes under different demand patterns, using the alternative solutions obtained from the model base.

Item Open Access
Inventory control under substitutable demand: A stochastic game application (John Wiley & Sons, 2002) Avşar, Z. M.; Baykal-Gürsoy, M.
The substitutable product inventory problem is analyzed using the concepts of stochastic game theory. It is assumed that there are two substitutable products sold by different retailers and that the demand for each product is random. The game-theoretic nature of this problem results from substitution between the products: since the retailers compete for the substitutable demand, the ordering decision of each retailer depends on the ordering decision of the other. Under the discounted payoff criterion, the problem is formulated as a two-person nonzero-sum stochastic game. In the case of linear ordering cost, it is shown that there exists a Nash equilibrium characterized by a pair of stationary base stock strategies for the infinite horizon problem, and that this is the unique Nash equilibrium within the class of stationary base stock strategies.

Item Open Access
Investments viewed as growth processes (Taylor & Francis, 1995) Doğrusöz, H.; Karabakal, N.
For modeling investment decision situations, we present a mathematical basis that views cash flow sequences as growth processes.
We first emphasize the pedagogical value of the basic model by showing that all traditionally established measures of worth (profitability), as well as the compound interest formulas of financial mathematics, can be derived from it by simple algebraic manipulations. We then argue that the traditional measures fail to recognize the particularities of certain decision situations, and point out the need to develop tailor-made measures for each specific problem. Using real-life examples, we demonstrate our approach for deriving new measures and, by incorporating decision variables, practical optimization models from this mathematical basis. © 1995 Taylor & Francis Group, LLC.

Item Open Access
Market-driven approach based on Markov decision theory for optimal use of resources in software development (Institution of Engineering and Technology, 2004) Noppen, J.; Aksit, M.; Nicola, V.; Tekinerdogan, B.
Changes in requirements may have a severe impact on development processes. For example, if requirements change during the course of a software development activity, it may be necessary to reschedule development activities so that the new requirements can be addressed in a timely manner. Unfortunately, current software development methods do not provide explicit means to adapt development processes in response to changing requirements. The paper proposes a method based on Markov decision theory that determines the estimated optimal development schedule with respect to probabilistic product demands and resource constraints. The method is supported by a tool and applied to an industrial case.

Item Open Access
Match-up scheduling under a machine breakdown (Elsevier, 1999) Aktürk, M. S.; Görgülü, E.
When a machine breakdown forces a modified flow shop (MFS) out of the prescribed state, the proposed strategy reschedules part of the initial schedule to match up with the preschedule at some point.
The objective is to create a new schedule that is consistent with other production planning decisions, such as material flow, tooling, and purchasing, by utilizing the time-critical decision-making concept. We propose a new rescheduling strategy and a match-up point determination procedure with a feedback mechanism to increase both schedule quality and stability. The proposed approach is compared with alternative reactive scheduling methods under different experimental settings. © 1999 Elsevier Science B.V. All rights reserved.

Item Open Access
A new lower bounding scheme for the total weighted tardiness problem (1998) Aktürk, M. S.; Yıldırım, M. B.
We propose a new dominance rule that provides a sufficient condition for local optimality for the 1∥ΣwᵢTᵢ problem. We prove that if any sequence violates the proposed dominance rule, then switching the violating jobs either lowers the total weighted tardiness or leaves it unchanged. The rule can therefore be used to reduce the number of alternatives in any exact approach to finding the optimal solution. We introduce an algorithm based on the dominance rule, which is compared with a number of competing approaches on a set of randomly generated problems. We also test the impact of the dominance rule on different lower bounding schemes. Our computational results over 30,000 problems indicate that the amount of improvement is statistically significant for both upper and lower bounding schemes. © 1998 Elsevier Science Ltd. All rights reserved.

Item Open Access
Order of limits in reputations (Springer, 2016) Dalkıran, N. A.
The fact that small departures from complete information might have large effects on the set of equilibrium payoffs draws interest to the adverse selection approach to studying reputations in repeated games. It is well known that these large effects on the set of equilibrium payoffs rely on long-run players being arbitrarily patient.
We study reputation games where a long-run player plays a fixed stage game against an infinite sequence of short-run players under imperfect public monitoring. We show that in such games, introducing arbitrarily small incomplete information does not open the possibility of new equilibrium payoffs far from the complete-information equilibrium payoff set. This holds no matter how patient the long-run player is, as long as her discount factor is fixed. This result highlights the fact that the aforementioned large effects arise from an order-of-limits argument, as anticipated. © 2016, Springer Science+Business Media New York.

Item Open Access
Qualitative test-cost sensitive classification (Elsevier BV, 2010) Cebe, M.; Gündüz-Demir, Ç.
This paper reports a new framework for test-cost sensitive classification. It introduces a new loss function in which the misclassification cost and the cost of feature extraction are combined qualitatively, and the loss is conditioned on the current and estimated decisions as well as their consistency. This loss function is motivated by two observations. First, for many applications, the relation between different types of costs can be expressed only roughly, usually in terms of ordinal relations rather than precise quantitative numbers. Second, the redundancy between features can be used to decrease the cost: a new feature need not be considered if it is consistent with the existing ones. We show the feasibility of the proposed framework for medical diagnosis problems. Our experiments demonstrate that the framework can significantly decrease feature extraction cost without decreasing accuracy. © 2010 Elsevier B.V. All rights reserved.

Item Open Access
Qualitative test-cost sensitive classification (2008) Cebe, Mümin
Decision making is a procedure for selecting the best action among several alternatives.
In many real-world problems, decisions must be made under circumstances in which one has to pay to acquire information. In this thesis, we propose a new framework for test-cost sensitive classification that considers the misclassification cost together with the cost of feature extraction, which arises from the effort of acquiring features. The proposed framework introduces two new concepts to test-cost sensitive learning for better modeling of real-world problems: qualitativeness and consistency. First, the framework incorporates qualitative costs into the problem formulation. This is important for many real-world problems, from finance to medical diagnosis, since the relation between the misclassification cost and the cost of feature extraction can be expressed only roughly, typically in terms of ordinal relations. For example, in cancer diagnosis, one can state that the cost of misdiagnosis is larger than the cost of a medical test. In the test-cost sensitive classification literature, however, the misclassification cost and the cost of feature extraction are combined quantitatively into a single loss/utility value, which requires expressing the relation between these costs as a precise quantitative number. Second, the proposed framework considers the consistency between the current information and the information after feature extraction when deciding which features to extract. For example, it does not extract a new feature that brings no new information but merely confirms the current one; in other words, a feature that is totally consistent with the current information. By doing so, the proposed framework can significantly decrease the cost of feature extraction, and hence the overall cost, without decreasing the classification accuracy. Such consistency behavior has not been considered in the previous test-cost sensitive literature.
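The consistency idea described in this abstract can be caricatured in a few lines. The sketch below is purely illustrative and greatly simplified: the thesis uses a Bayesian, qualitatively conditioned loss and estimates the expected decision before extraction, whereas this toy version simply compares decisions directly. All names and values here are hypothetical.

```python
def acquire_features(classify, candidates, costs):
    """Greedily extract a candidate feature only when the decision computed
    with it disagrees with (is inconsistent with) the current decision."""
    acquired = {}
    total_cost = 0.0
    decision = classify(acquired)
    for name, value in candidates.items():
        expected = classify({**acquired, name: value})
        if expected != decision:            # inconsistent: pay for the feature
            acquired[name] = value
            total_cost += costs[name]
            decision = expected
        # consistent: the feature would only confirm the decision, so skip it
    return decision, total_cost
```

With a toy classifier that flags a sample as positive whenever any acquired feature value is positive, a second feature that merely confirms the current decision is never paid for, which is the cost-saving behavior the abstract describes.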
We conduct our experiments on three medical data sets, and the results demonstrate that the proposed framework significantly decreases the feature extraction cost without decreasing the classification accuracy.

Item Open Access
Satisfying due-dates in a job shop with sequence-dependent family set-ups (Taylor & Francis, 2003) Taner, M. R.; Hodgson, T. J.; King, R. E.; Thoney, K. A.
This paper addresses job shop scheduling with sequence-dependent family set-ups. Starting from a simple single-machine dynamic scheduling problem, state-dependent scheduling rules for the single-machine problem are developed and tested using Markov Decision Processes. A generalized scheduling policy for the job shop problem is then established based on a characterization of the optimal policy. The policy is combined with a 'forecasting' mechanism that exploits global shop-floor information for local dispatching decisions. Computational results show that its performance is significantly better than that of existing alternative policies.

Item Open Access
Test-cost sensitive classification based on conditioned loss functions (Springer, 2007-09) Cebe, Mümin; Gündüz-Demir, Çiğdem
We report a novel approach for designing test-cost sensitive classifiers that considers the misclassification cost together with the cost of feature extraction, utilizing consistency behavior for the first time. In this approach, we propose a new Bayesian decision-theoretic framework in which the loss is conditioned on the current decision, the expected decisions after additional features are extracted, and the consistency among the current and expected decisions. This approach allows us to force feature extraction for samples whose current and expected decisions are inconsistent. Conversely, it forces no features to be extracted in the case of consistency, leading to less costly but equally accurate decisions.
In this work, we apply this approach to a medical diagnosis problem and demonstrate that it reduces the overall feature extraction cost by up to 47.61 percent without decreasing the accuracy. © Springer-Verlag Berlin Heidelberg 2007.

Item Open Access
Voting as validation in robot programming (Sage Publications Ltd., 1999-04) Utete, S. W.; Barshan, B.; Ayrulu, B.
This paper investigates the use of voting as a conflict-resolution technique for data analysis in robot programming. Voting represents an information-abstraction technique. It is argued that in some cases a voting approach is inherent in the nature of the data being analyzed: multiple independent sources of information must be reconciled to give a group decision that reflects a single outcome rather than a consensus average. The study considers an example of target classification using sonar sensors. Physical models of reflections from target primitives typical of the indoor environment of a mobile robot are used. Dispersed sensors make decisions on target type, which must then be fused to give a single group classification of the presence or absence and type of a target. Dempster-Shafer evidential reasoning is used to assign a level of belief to each sensor decision. The decisions are then fused in two ways. Using Dempster's rule of combination, conflicts are resolved through a group measure expressing dissonance in the sensor views. This evidential approach is contrasted with the resolution of sensor conflict through voting. It is demonstrated that abstraction of the level of belief through voting proves useful in resolving the straightforward conflicts that arise in the classification problem. Conflicts arise where the discriminant data value, an echo amplitude, is most sensitive to noise.
Fusion helps to overcome this vulnerability: in Dempster-Shafer reasoning, through the modeling of nonparametric uncertainty and the combination of belief values; and in voting, by emphasizing the majority view. The paper gives theoretical and experimental evidence for the use of voting for data abstraction and conflict resolution in areas such as classification, where a strong argument can be made for techniques that emphasize a single outcome rather than an estimated value. Methods for making the vote more strategic are also investigated, and the paper addresses reducing the dimension of sets of decision points or decision makers. Through a consideration of combination order, queuing criteria for more strategic fusion are identified.
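As a minimal, self-contained sketch of Dempster's rule of combination used in the voting paper above: two sensors report belief masses over a two-element frame. The frame {target, clutter} and the mass values are hypothetical, not taken from the sonar experiments; only the combination rule itself is standard.

```python
def dempster_combine(m1, m2):
    """Combine two belief mass functions (dicts keyed by frozenset focal
    elements) with Dempster's rule, renormalizing away conflicting mass."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # incompatible evidence
    k = 1.0 - conflict                       # normalization constant
    return {s: v / k for s, v in combined.items()}

# Two sensors reporting on the frame {target, clutter} (hypothetical masses)
T, C = frozenset({"target"}), frozenset({"clutter"})
THETA = T | C                                # total ignorance
m1 = {T: 0.6, THETA: 0.4}
m2 = {T: 0.7, C: 0.2, THETA: 0.1}
fused = dempster_combine(m1, m2)
```

Here the fused mass on {target} rises to about 0.86, while the conflicting mass (0.12) is normalized out; the "dissonance" measure the abstract mentions corresponds to that conflicting mass.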