# Dept. of Industrial Engineering - Master's degree

### Recent Submissions

#### Finding all equitably non-dominated points of multiobjective integer programming problems

Ulutaş, Seyit. Bilkent University, 2023-09. Open Access.

Equitable multiobjective programming (E-MOP) problems are multiobjective programming problems of a special type: the decision-maker has equity concerns and hence an equitable rational preference model. In line with this, our aim is to find all equitably non-dominated points (ENs) of multiobjective integer problems. There are different approaches to solving E-MOP problems. We use equitable aggregation functions and develop two algorithms: one for equitable biobjective integer programming (E-BOIP) problems and one for equitable multiobjective integer programming (E-MOIP) problems with more than two objectives. In the first algorithm, we solve Pascoletti-Serafini (PS) scalarization models iteratively, ensuring that a weakly equitably non-dominated point is obtained in each iteration. In the second algorithm, we use the cumulative ordered weighted average within the ExA algorithm of Özpeynirci and Köksalan [1] to first find all extreme supported equitably non-dominated points (ESNs). After finding all ESNs, we use them to define the regions that could contain ENs, and then apply a split algorithm to find all remaining ENs. We also provide a split-only version of the algorithm, since finding all ESNs can be time-consuming. We compare the two versions on multiobjective assignment and knapsack problem instances. Although the split-only version is quicker, the original version remains useful because it yields information about the weight space decomposition of the ESNs; a discussion of this decomposition is also provided.

#### Allocating vaccines under scarce supply

Kılınç, Onur. Bilkent University, 2023-08. Open Access.

We consider the vaccine allocation problem under scarce supply.
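The equitable-dominance relation used in the first abstract above can be sketched with cumulative ordered sums (a minimal sketch under a minimization convention; the function names are ours, and the thesis's exact definitions may differ):

```python
def theta(y):
    """Cumulative ordered map: sort the outcome vector in non-increasing
    order and take cumulative sums (a minimization convention)."""
    out, total = [], 0.0
    for v in sorted(y, reverse=True):
        total += v
        out.append(total)
    return out

def equitably_dominates(y, z):
    """y equitably dominates z if theta(y) <= theta(z) componentwise,
    with strict inequality in at least one component."""
    ty, tz = theta(y), theta(z)
    return all(a <= b for a, b in zip(ty, tz)) and \
           any(a < b for a, b in zip(ty, tz))

# The more balanced outcome (2, 2) equitably dominates (3, 1), even
# though neither vector Pareto-dominates the other.
```

Under plain Pareto dominance these two vectors are incomparable; the preference for balanced outcomes is exactly what the cumulative ordered sums encode.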
We formulate the problem as a two-stage stochastic programming model that accounts for uncertain factors such as vaccine efficacy, disease spread dynamics, and the amount of future supply. We discuss two variants of the model that could be used under different preferences, and we demonstrate the usability of our formulations on two case studies generated from real-life data. The results show that incorporating the uncertainty in these factors into the decision-making process would allow policy makers to adopt more effective strategies with an adaptive nature. This is also indicated by the value of the stochastic solution, which shows a significant improvement in disease control gained by the stochastic programming solution compared to a plan based on expected figures.

#### Sales planning in closed-loop supply chains: recycling and remanufacturing options for early-generation returns

Bayrak, Büşra. Bilkent University, 2023-08. Open Access.

We consider a durable-good producer who optimizes its sales strategy for two successive generations of the same product and is able to remanufacture or recycle the first-generation product returns. Customer arrivals follow a multi-generation diffusion process that captures the word-of-mouth feedback within each customer population of successive product generations as well as the substitution effect among these generations. We investigate the economic viability of deliberately slowing down the second-generation product diffusion to improve first-generation remanufactured-item sales and the use of recycled content in second-generation production in the long run.
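The two-stage structure described in the vaccine-allocation abstract can be illustrated with a toy here-and-now computation (all numbers invented for illustration; the thesis's model is far more detailed):

```python
# Toy two-stage stochastic program (invented data, not the thesis model).
# First stage: allocate x doses now at unit cost c. Second stage: demand
# d and extra supply q are revealed per scenario; unmet demand is penalized.
c, penalty = 1.0, 10.0
scenarios = [  # (probability, demand, future supply)
    (0.5, 80, 30),
    (0.3, 100, 20),
    (0.2, 120, 10),
]

def expected_cost(x):
    """First-stage cost plus expected second-stage (recourse) cost."""
    cost = c * x
    for p, d, q in scenarios:
        shortage = max(0, d - x - q)  # future supply q covers part of demand
        cost += p * penalty * shortage
    return cost

# The here-and-now decision hedges against all scenarios at once,
# rather than planning for a single expected-value scenario.
best_x = min(range(0, 151, 10), key=expected_cost)
```

Comparing `expected_cost(best_x)` with the cost of the expected-value plan is the same comparison the value of the stochastic solution formalizes.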
We prove that such a forward-looking approach is optimal if (i) the diffusion process is fast enough in the absence of any manipulation, (ii) the number of first-generation end-of-life returns and the recyclable-material amount from each such return are high enough, and (iii) the potential customer base of the first-generation product is sufficiently large. We also show that the same forward-looking approach is less likely to be optimal when used items can only be acquired from previous buyers of the first-generation product who return them to trade up to the second-generation product. We conjecture that our sales strategy has the potential not only to improve long-run profits but also to contribute to sustainable production and consumption by helping recover more used items via remanufacturing and recycling.

#### Lagrangian relaxation for airport gate assignment problem

Okur, Göksu Ece. Bilkent University, 2023-07. Open Access.

In this study we focus on the airport gate assignment problem of minimizing the total walking distance of passengers while ensuring that the number of aircraft assigned to the apron is at its minimum. We use an alternative formulation of the problem compared to those in the literature and propose approaches based on Lagrangian relaxation to obtain tight lower bounds. The method also harnesses the power of a good initial upper bound and provides good-quality solutions. To the best of our knowledge, current studies in the literature rely only on upper bounds or linear-relaxation lower bounds to assess the quality of heuristic solutions; we propose the tighter Lagrangian relaxation bounds as a better reference for assessing solution quality.
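A generic Lagrangian lower-bounding loop of the kind the gate-assignment abstract relies on can be sketched on a tiny invented instance (the thesis's actual formulation and data are different):

```python
# Toy Lagrangian relaxation for a 2-flight / 2-gate assignment
# (illustrative data only). Each flight picks exactly one gate in the
# subproblem; the one-flight-per-gate constraints are relaxed with
# multipliers `lam` and tightened by subgradient ascent on the dual.
cost = [[4.0, 6.0], [5.0, 9.0]]  # cost[flight][gate]
n_flights, n_gates = 2, 2
lam = [0.0] * n_gates

best_lb = float("-inf")
for it in range(50):
    # The relaxed problem separates by flight: take the cheapest gate
    # under the multiplier-adjusted costs.
    choice = [min(range(n_gates), key=lambda j: cost[i][j] + lam[j])
              for i in range(n_flights)]
    lb = (sum(cost[i][choice[i]] + lam[choice[i]] for i in range(n_flights))
          - sum(lam))
    best_lb = max(best_lb, lb)
    # Subgradient step: gate overuse relative to a capacity of one flight.
    step = 1.0 / (it + 1)
    for j in range(n_gates):
        overuse = sum(1 for i in range(n_flights) if choice[i] == j) - 1
        lam[j] = max(0.0, lam[j] + step * overuse)
# best_lb approaches the optimal assignment cost (11.0 here) from below.
```

Because every dual value is a valid lower bound, the gap between `best_lb` and any feasible (upper-bound) solution certifies how close that solution is to optimal.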
Our computational experiments demonstrate that our Lagrangian relaxation based method returns strong lower bounds and good-quality upper bounds that are comparable to state-of-the-art results from the literature.

#### A replenishment policy for perishable items with cold chain transportation and lead time reduction options

Bayır, Atahan. Bilkent University, 2023-07. Open Access.

Perishability concerns products that have a limited shelf life and are prone to spoilage, decay, or becoming unsafe for use over time. Although perishability is common in several product categories, such as fresh produce, pharmaceuticals, blood products, and fashion goods, many inventory models assume that products have an infinite shelf life. In this work, we introduce a novel approach that integrates cold chain technology decisions into the inventory replenishment policy, with the aim of better preserving the effective shelf life of products. We focus on a single-product, single-location, continuous-review inventory model with positive lead time, where products have fixed lifetimes and demand arrivals follow a Poisson process. If two batches are in stock, items in the old batch are disposed of at a salvage value. We assume that a cold chain technology is available to keep items in a preservative environment, protecting them against temperature and moisture during the lead time and thereby extending their shelf life upon arrival. In particular, we adopt a modified lot size/reorder point (Q,r) policy that allows for a cold chain technology. The model also allows lead time to be reduced through expedited transportation, which directly increases the shelf life of the arriving batch by decreasing the time spent in transit.
The objective is to minimize the expected cost per unit time over an infinite horizon with respect to the decision variables of order quantity, reorder level, and cold chain technology level. A numerical study demonstrates the model's performance and its sensitivity to system parameters, and compares all the policies we present.

#### Algorithms for sparsity constrained principal component analysis

Aktaş, Fatih Selim. 2023-07. Open Access.

The classical principal component analysis problem consists of finding a linear transform that reduces the dimensionality of the original dataset while keeping most of the variation. An extra sparsity constraint sets most of the coefficients to zero, which makes the linear transform easier to interpret. We present two approaches to sparsity constrained principal component analysis. First, we develop computationally cheap heuristics that can be deployed in very high-dimensional problems; these heuristics are justified with linear-algebra approximations and theoretical guarantees, and are further strengthened by exploiting the necessary conditions of the optimization model. Second, we use a non-convex log-sum penalty in the semidefinite space. We show a connection to the cardinality function and develop an algorithm, PCA Sparsified, that solves the problem locally via a sequence of convex optimization problems. We analyze the theoretical properties of this algorithm and comment on its numerical implementation. Moreover, we derive a pre-processing method that can be used with previous approaches. Finally, our numerical experiments show that our greedy algorithms scale easily to high-dimensional problems while being highly competitive with state-of-the-art algorithms on many problems, even beating them uniformly in some cases. Additionally, we illustrate the effectiveness of PCA Sparsified on small-dimensional problems in terms of variance explained.
Although it is computationally very demanding, it consistently outperforms local and greedy approaches.

#### Systemic risk measures based on value-at-risk

Al-Ali, Wissam. Bilkent University, 2023-07. Open Access.

This thesis addresses the problem of computing systemic set-valued risk measures. The proposed method combines the clearing mechanism of the Eisenberg-Noe model, used as an aggregation function, with value-at-risk, used as the underlying risk measure. The sample-average approximation (SAA) of the corresponding set-valued systemic risk measure can be calculated by solving a vector optimization problem. For this purpose, we propose a variation of the so-called grid algorithm in which grid points are evaluated by solving certain scalar mixed-integer programming problems, namely the Pascoletti-Serafini and norm-minimizing scalarizations. At the initialization step, we solve weighted-sum scalarizations to establish a compact region for the algorithm to work on. We prove the convergence of the SAA optimal values of the scalarization problems to their respective true values. Moreover, we prove the convergence of the approximated set-valued risk measure to the true set-valued risk measure in both the Wijsman and Hausdorff senses. To demonstrate the applicability of our findings, we construct a financial network based on the Bollobás preferential attachment model and model the economic disruptions using identically distributed random variables with a Pareto distribution. We conduct a comprehensive sensitivity analysis to investigate the effect of the number of scenarios, the correlation coefficient, and the Bollobás network parameters on the systemic risk measure.
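As a pointer to the underlying risk measure in the abstract above, a scalar empirical value-at-risk can be computed as a sample quantile (one common convention; the set-valued systemic construction in the thesis is far richer than this sketch):

```python
import math

def value_at_risk(losses, alpha=0.95):
    """Empirical value-at-risk: the alpha-level sample quantile of the
    losses, i.e. the smallest loss value such that at least an alpha
    fraction of the sample lies at or below it."""
    s = sorted(losses)
    k = math.ceil(alpha * len(s)) - 1
    return s[max(k, 0)]

# With 100 equally likely losses 1..100, the 95% VaR is the 95th value.
print(value_at_risk(range(1, 101), 0.95))  # 95
```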
The results highlight the minimal influence of the number of scenarios and the correlation coefficient on the risk measure, demonstrating its stability and robustness, while shedding light on the profound significance of the Bollobás network parameters in determining the network dynamics and the overall level of systemic risk.

#### Point cloud registration using quantile assignment

Oğuz, Ecenur. Bilkent University, 2023-07. Open Access.

Point cloud registration is a fundamental problem in computer vision with a wide range of applications. The problem consists of three main parts: feature estimation, correspondence matching, and transformation estimation. We introduce the Quantile Assignment problem and propose a solution algorithm to be used in a point cloud registration framework for establishing the correspondence set between the source and target point clouds. We analyze different common feature descriptors and transformation estimation methods to combine with our Quantile Assignment algorithm, testing their performance together with our algorithm in controlled experiments on a dataset we constructed from well-known 3D models. We identify the most suitable methods to combine with our approach and propose a new end-to-end pairwise point cloud registration framework. Finally, we test our framework on both indoor and outdoor benchmark datasets and compare our results with state-of-the-art point cloud registration methods from the literature.

#### Approximation algorithms for difference of convex (DC) programming problems

Pirani, Fahaar Mansoor. Bilkent University, 2023-07. Open Access.

This thesis is concerned with difference of convex (DC) programming problems and approximation algorithms to solve them. There is an existing exact algorithm that solves DC programming problems if one component of the DC function is polyhedral convex [1].
Motivated by this, we first propose an algorithm (Algorithm 1) for generating an ϵ-polyhedral underestimator of a convex function g. The algorithm starts with a polyhedral underestimator of g, and in each iteration the epigraph of the current underestimator is intersected with a single halfspace to obtain a better approximation. We prove the correctness of Algorithm 1 and establish its convergence rate. We also propose a modified variant (Algorithm 2) in which multiple halfspaces are used to update the epigraph of the current approximation in each iteration; in addition to its correctness, we prove that Algorithm 2 terminates after finitely many iterations. We show that after obtaining an ϵ-polyhedral underestimator of the first component of a DC function, the algorithm from [1] can be applied to compute an ϵ-solution of the DC programming problem. We further propose an algorithm (Algorithm 3) for solving DC programming problems directly. In each iteration, Algorithm 3 updates the polyhedral underestimator of g locally while searching for an ϵ-solution of the DC problem. We prove that the algorithm stops after finitely many iterations and returns an ϵ-solution to the DC programming problem; moreover, the sequence {x_k}_{k≥0} produced by Algorithm 3 converges to a global minimizer of the DC problem when ϵ is set to zero. Computational results on test examples from [2] show that Algorithms 1, 2, and 3 perform comparably with two DC programming algorithms from the literature.

#### Diffusion control in closed-loop supply chains: successive product generations with remanufacturing potential

Güray, Büşra. Bilkent University, 2023-06. Open Access.

We consider a durable-good producer who aims to jointly optimize its sales decisions for two successive product generations that are remanufacturable. Customer arrivals are governed by the generalized Norton-Bass diffusion process over a finite selling horizon.
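For intuition about such diffusion dynamics, the classical single-generation Bass model, a simplification of the generalized Norton-Bass process mentioned above, can be simulated with invented parameters:

```python
def bass_adopters(m=1000.0, p=0.03, q=0.38, periods=40):
    """Discrete-time single-generation Bass model: per-period adoptions
    n(t) = (p + q * N / m) * (m - N), where N is the cumulative number
    of adopters, p the innovation and q the imitation coefficient.
    (The thesis uses the multi-generation Norton-Bass extension.)"""
    N, path = 0.0, []
    for _ in range(periods):
        n = (p + q * N / m) * (m - N)
        N += n
        path.append(N)
    return path

curve = bass_adopters()
# The cumulative curve is S-shaped: slow start, rapid middle, and
# saturation just below the market potential m.
```

Throttling sales in such a model flattens the middle of the S-curve, which is the lever behind the diffusion-control questions studied in these closed-loop supply chain abstracts.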
The remanufactured-item sales are constrained by the available end-of-use returns in each time period for each product generation. We investigate whether the producer can profit from partially satisfying the second-generation product demand to smooth out the second-generation diffusion curve and increase the total remanufactured-item sales in the long run. We show that the partial-fulfillment policy is optimal for fast-clockspeed products if (i) the profit margin ratio of the remanufactured item to the new item is large enough for the second-generation product, (ii) the profit margin ratio of the first-generation new item to the second-generation new item is high enough, (iii) the fraction of customers willing to buy the remanufactured item is only moderately large for each product generation, and (iv) the number of customers who are initially attracted by the first-generation product and willing to buy the remanufactured item is not too large. We also characterize the environmentally critical time period beyond which the optimal initiation of partial demand fulfillment leads to no improvement in the total remanufacturing volume for the second-generation product.

#### Risk-averse optimization of wind-based electricity generation with battery storage

Eser, Merve. Bilkent University, 2022-12. Open Access.

As the global installed capacity of wind power increases, various solutions have been developed to accommodate the intermittent nature of wind. Investing in battery storage reduces power fluctuations, improves the reliability of delivering power on demand, and decreases wind curtailment. In the literature, power producers are generally modelled as risk-neutral decision makers, and the focus has been on expected-profit maximization.
Although the expected-value-maximization objective is suitable for large corporations with diversified investors, for many privately held small independent power producers it is more important to capture risk-aversion through specialized risk measures driven by the owners' specific risk preferences. We consider a risk-averse, privately held, small independent power producer interested in investing in a battery storage system and jointly operating the wind farm and storage system over a transmission line connected to the market. We formulate the problem as a Markov decision process (MDP) to find optimal investment, generation, and operational storage decisions, and we incorporate risk-aversion using dynamic coherent risk measures. Choosing the risk measure as first-order mean semi-deviation, we obtain an optimal threshold-based policy structure as well as the optimal storage investment capacity, and we perform a sensitivity analysis of the optimal storage capacity with respect to the degree of risk-aversion and transmission line limitations.

#### Production line calibration with data analysis

Taş, İsmail Burak. Bilkent University, 2022-09. Open Access.

Product weights can be statistically related to controllable and uncontrollable factors of the production processes, and uncontrollable factors may be correlated with controllable ones. We fitted a response-surface approximator of product weights and found sub-optimal controllable factor values that minimize product weight. Furthermore, we found that the uncertainty of the uncontrollable variables and the correlation among them may affect the result of product weight minimization; the company may implement these findings to reduce production costs. We also formulated a fully Bayesian experimental design problem to minimize product weight tolerance limits and built hierarchical models.
Posterior distributions of the hierarchical models' parameters can be simulated by a Gibbs sampler. However, we conclude that the effectiveness and convergence of the Gibbs sampler may not be robust to candidate design settings while searching over the design space to solve the experimental design problem.

#### Employee turnover probability prediction

Barın, Hüsameddin Deniz. Bilkent University, 2022-09. Open Access.

Employee turnover prediction is crucial for companies because it lets employers take precautionary action in advance. Turnover data provided by a company are examined throughout the thesis. First, the missing data were imputed. Then a hierarchical model, aimed at explaining the attrition heterogeneity among employees and preventing separation, was fitted to the data set. Finally, the results of the implementation were analyzed alongside benchmark models. The proposed hierarchical model achieved higher performance on the target metric than the benchmark models, and the heterogeneity across units was inferred through the hierarchical model.

#### A chance constrained approach to optimal sizing of renewable energy systems with pumped hydro energy storage

Kalkan, Nazlı. Bilkent University, 2022-08. Open Access.

Burning fossil fuels is responsible for a large portion of the greenhouse gases released into the atmosphere. In addition to their negative environmental impacts, fossil fuels are limited, which makes the integration of renewable energy sources into the grid inevitable. However, the intermittent nature of renewable energy sources makes it challenging to regulate energy output, resulting in low system flexibility. Adopting an energy storage system, such as pumped hydro energy storage (PHES) or batteries, is necessary to fully utilize and integrate a larger proportion of variable renewable energy sources into the grid.
On the other hand, in investment planning problems, satisfying the demand with certainty even for infrequently occurring events can lead to considerable cost increases. In this study, we propose a chance constrained two-stage stochastic program for designing a hybrid renewable energy system in which the intermittent solar energy output is supported by a closed-loop PHES system. The aim is to minimize the total investment cost while meeting the energy demand at a predetermined service level. For our computational study, we generate scenarios for solar radiation using an autoregressive integrated moving average (ARIMA) based algorithm. To solve our large-scale problem exactly, we utilize a Benders-based branch-and-cut decomposition algorithm. We analyze the efficiency of the proposed solution method by comparing its CPU times with those of CPLEX; the findings indicate that the proposed algorithm solves the problem faster than CPLEX.

#### Product line design under multinomial logit model

Ergül, Çağla. Bilkent University, 2022-07. Open Access.

Product line design has significant effects on a firm's profitability and market share. Firms attempt to make their product lines more diverse in order to satisfy the increasingly heterogeneous demand of their customers; however, production and operational costs increase as the product line becomes more diverse. Hence, designing a product line that balances the potential increase in profit from high product variety against the costs is a crucial decision. We study the capacitated product line design problem of a firm wishing to introduce a new product line. Given a set of attributes, the firm decides how many products to offer and which attributes to include in each product. Customer choice is modeled by a multinomial logit (MNL) model, and the average utilities are assumed to be linear in the product attributes.
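The MNL choice structure just described can be sketched as follows (a generic textbook form with illustrative attribute weights and a zero-utility outside option, not the thesis's exact parameterization):

```python
import math

def mnl_choice_probs(utilities, u_outside=0.0):
    """Multinomial-logit purchase probabilities for an assortment with
    the given mean utilities; `u_outside` is the no-purchase utility."""
    w = [math.exp(u) for u in utilities]
    w0 = math.exp(u_outside)
    total = w0 + sum(w)
    return [x / total for x in w], w0 / total

# Utilities linear in attributes: u = sum(beta_k * a_k) for each product.
beta = [0.8, -0.3]            # illustrative attribute weights
products = [[1, 0], [1, 1]]   # attribute vectors of two candidate products
utils = [sum(b * a for b, a in zip(beta, prod)) for prod in products]
probs, no_purchase = mnl_choice_probs(utils)
```

Each product's market share depends on the whole assortment through the shared denominator, which is why assortment and attribute choices cannot be optimized product by product.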
We study scenarios in which the sales prices of the products are exogenous and endogenous, with greater emphasis on the former. For the exogenous-price scenario, we study the case in which the firm has two attributes in consideration and different capacity levels. We characterize the inequalities needed to choose one assortment over another at each capacity level. For the 1-capacitated problem, we show that the optimal product can be characterized by two inequalities, and we later extend this result to the case where the firm has a finite number of attributes in consideration. We also elaborate on how changes in the model parameters affect the choice of the optimal product for the 1-capacitated problem. We propose two rules for identifying assortments that are never optimal when the firm has a capacity greater than one; with these rules, we reduce the number of assortments that need to be checked for optimality. Furthermore, we introduce a procedure to find the optimal assortment in the uncapacitated problem. For the endogenous-price scenario, we assume the firm has a finite set of attributes and find closed-form solutions for the optimal product prices. For the 1-capacitated problem, we show that it is optimal to include every attribute whose additional average utility exceeds its additional cost. Lastly, we extend this result to the case in which the firm can offer more than one product: the firm always fills up the capacity, and the products with the largest utility markups are offered.

#### Sparsity penalized mean-variance portfolio selection: computation and analysis

Şen, Buse. Bilkent University, 2022-07. Open Access.

The problem of selecting the best portfolio of assets, the so-called mean-variance portfolio (MVP) selection problem, has become a prominent mathematical problem in the asset management framework.
We consider the MVP selection problem regularized with an ℓ0-penalty term to control the sparsity of the portfolio. We analyze the structure of local and global minimizers, show the existence of global minimizers, and develop a necessary condition for global minimizers in the form of a componentwise lower bound; we use these results in the design of a branch-and-bound algorithm. Extensive computational results with real-world data are reported, along with comparisons against an off-the-shelf, state-of-the-art mixed-integer quadratic programming (MIQP) solver. The behavior of the portfolio's risk against the expected return and the penalty parameter is examined through numerical experiments. Finally, we present the returns accumulated over time by the solutions of the branch-and-bound algorithm and Lasso for the instances that the MIQP solver fails to solve.

#### Topics in optimization via Deep Neural Networks

Ekmekcioğlu, Ömer. Bilkent University, 2022-06. Open Access.

We present two studies at the intersection of deep learning and optimization: Deep Portfolio Optimization and Subset Based Error Recovery. With the emergence of deep models in finance, the portfolio optimization trend has shifted from classical model-based approaches towards data-driven models. However, deep portfolio models generally suffer from the non-stationary nature of the data, and the results obtained are not always stable. To address this issue, we propose using Graph Neural Networks (GNNs), which allow us to incorporate graphical knowledge to increase the stability of the models and improve on the results obtained by state-of-the-art recurrent architectures. Furthermore, we analyze the algorithmic risk-return trade-off for deep portfolio optimization models to give insights on risk for fully data-driven models. We also propose a data denoising method using the Extreme Learning Machine (ELM) structure.
Furthermore, we show that the method is equivalent to a robust two-layer ELM that implicitly benefits from the proposed denoising algorithm. Current robust ELM methods in the literature involve well-studied L1 and L2 regularization techniques as well as robust loss functions such as the Huber loss. We extend recent analysis from the robust regression literature to be effective in more general, non-linear settings and compatible with any machine learning algorithm, such as neural networks (NNs). These methods are useful when the observations suffer from heavy noise. Tests of the denoising and regularized ELM methods are conducted on both synthetic and real data; our method performs better than its competitors in most scenarios and successfully eliminates most of the noise.

#### A new selective location routing problem: educational services for refugees

Demir, Şebnem Manolya. Bilkent University, 2022-07. Open Access.

The Syrian War has forced 5.5 million Syrians to seek asylum. Turkey hosts 3.7 million Syrian refugees, 47% of whom are children. Even though the schooling rate of Syrian refugee children has steadily increased, more than 400 thousand children currently remain distanced from education, and Turkey's initial plans did not account for a refugee crisis lasting a decade. In this study, we first identify the availability and accessibility challenges posed by the country's existing plans for integrating refugees into the national education system. Then, to reinforce schooling access for refugee children in Turkey, we develop a planning strategy that is aligned with the local regulations.
To improve school enrollment rates among Syrian refugee children without burdening the existing infrastructure of the host country, we formulate the Capacitated Maximal Covering Problem with Heterogeneity Constraints (CMCP-HC) and two extensions: Cooperative CMCP-HC (CCMCP-HC), to improve current schooling access in Turkey, and Modular CCMCP-HC, to guide early planning in the case of a future crisis. As lack of school accessibility has been identified as one of the significant challenges hampering school attendance rates, we incorporate routing decisions. To ease children's transportation to schools, we propose a new Selective Location Routing Problem (SLRP), a novel formulation in which the location decisions impact the selective nature of the routing problem. For cases with further scarcity of resources, we introduce the Attendance-based SLRP (A-SLRP) and represent children's attendance behavior as a gradual decay function of distance. To solve these two complex problems, we offer a 2-stage solution approach that yields optimal solutions for A-SLRP. Results of our computational analysis with real-life data from the most densely refugee-populated Turkish province illustrate that CCMCP-HC and Modular CCMCP-HC improve schooling enrollment rates and capacity utilization compared to the status quo. Moreover, SLRP and A-SLRP enable approximately twice as many children to continue their education compared to the benchmark formulation.
Overall, this study analyzes Turkey's experience and the lessons learned over a decade to provide a road map, based on operations research methodologies, for potential similar situations in the future.

#### Finding robustly fair solutions in resource allocation

Elver, İzzet Egemen. Bilkent University, 2022-07. Open Access.

In this study, we consider resource allocation problems in which the decisions affect multiple beneficiaries and the decision maker aims to distribute the effect among the beneficiaries in an equitable manner. We specifically consider stochastic environments with uncertainty in the system and propose a robust programming approach that maximizes system efficiency (measured by the total expected benefit) while guaranteeing an equitable benefit allocation even under the worst scenario. Acknowledging that the robust solution may lead to high efficiency loss and may be over-conservative, we adopt a parametric approach that allows controlling the level of conservatism and presents the decision maker with alternative solutions revealing the trade-off between the total expected benefit and the degree of conservatism when incorporating fairness. We obtain tractable formulations by leveraging our results on the properties of highly unfair allocations, and we demonstrate the usability of our approach on project selection and shelter allocation applications.

#### Bayesian in-service failure rate models

Alankaya, Tolunay. Bilkent University, 2022-08. Open Access.

Predicting the number of appliance failures during after-sales service is crucial for manufacturers to detect production errors and plan spare-part inventories. We provide a two-phased Bayesian model that predicts the number of refrigerators that fail after sale; the study thus covers both sales forecasting and failure detection.
The two-phased Bayesian model is trained on datasets provided by a leading durable home appliances company. The accuracy results show that one-level models are inferior to multi-level models when the data are sparse. We conclude that hierarchical Bayesian models are preferable, since they can naturally capture the heterogeneity across all blends of attributes.
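The partial-pooling behavior that makes hierarchical models attractive under sparse data, as noted in the last abstract, can be illustrated with a minimal empirical-Bayes shrinkage sketch (function name, `prior_strength`, and all numbers are illustrative, not from the thesis):

```python
def pooled_rates(failures, units, prior_strength=50.0):
    """Shrink each group's raw failure rate toward the overall rate,
    more strongly for small groups. A minimal empirical-Bayes stand-in
    for a full hierarchical Bayesian model."""
    overall = sum(failures) / sum(units)
    return [(f + prior_strength * overall) / (n + prior_strength)
            for f, n in zip(failures, units)]

# A sparse group (10 units) is pulled hard toward the overall rate,
# while a well-observed group (1000 units) barely moves.
rates = pooled_rates(failures=[3, 50], units=[10, 1000])
```

This is the one-level-versus-multi-level contrast in miniature: an unpooled estimate would report 30% failures for the sparse group, while pooling tempers that estimate with what the rest of the fleet shows.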