Browsing by Subject "Convex vector optimization"
Now showing 1 - 7 of 7
Item Open Access: Algorithms to solve unbounded convex vector optimization problems (Society for Industrial and Applied Mathematics Publications, 2023-10-12). Wagner, A.; Ulus, Firdevs; Rudloff, B.; Kováčová, G.; Hey, N.

This paper is concerned with solution algorithms for general convex vector optimization problems (CVOPs). So far, solution concepts and approximation algorithms for solving CVOPs exist only for bounded problems [Ç. Ararat, F. Ulus, and M. Umer, J. Optim. Theory Appl., 194 (2022), pp. 681-712], [D. Dörfler, A. Löhne, C. Schneider, and B. Weißing, Optim. Methods Softw., 37 (2022), pp. 1006-1026], [A. Löhne, B. Rudloff, and F. Ulus, J. Global Optim., 60 (2014), pp. 713-736]. They provide polyhedral inner and outer approximations of the upper image that have a Hausdorff distance of at most ε. However, it is well known (see [F. Ulus, J. Global Optim., 72 (2018), pp. 731-742]) that for some unbounded problems such polyhedral approximations do not exist. In this paper, we propose a generalized solution concept, called an (ε,δ)-solution, that allows one to also consider unbounded CVOPs. It is based on additionally bounding the recession cones of the inner and outer polyhedral approximations of the upper image in a meaningful way. An algorithm is proposed that computes such δ-outer and δ-inner approximations of the recession cone of the upper image. In combination with the results of [A. Löhne, B. Rudloff, and F. Ulus, J. Global Optim., 60 (2014), pp. 713-736], this provides a primal and a dual algorithm that allow one to compute (ε,δ)-solutions of (potentially unbounded) CVOPs.
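The bounded-case solution concept referenced above can be sketched as follows (the notation is assumed here for illustration, following the cited works rather than this listing): the algorithms return polyhedral sets that sandwich the upper image 𝒫 within Hausdorff distance ε,

```latex
% Sketch of the bounded-case approximation concept (notation assumed):
% P_in and P_out are polyhedral approximations of the upper image P.
\[
  \mathcal{P}_{\mathrm{in}} \subseteq \mathcal{P} \subseteq \mathcal{P}_{\mathrm{out}},
  \qquad
  d_H\!\left(\mathcal{P}_{\mathrm{in}}, \mathcal{P}_{\mathrm{out}}\right) \le \varepsilon .
\]
```

The (ε,δ)-solution concept of this paper additionally requires the recession cones of the inner and outer approximations to be δ-inner and δ-outer approximations of the recession cone of 𝒫, which is what makes the unbounded case tractable.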
Numerical examples are provided.

Item Open Access: Certainty equivalent and utility indifference pricing for incomplete preferences via convex vector optimization (Springer Science and Business Media Deutschland GmbH, 2020). Rudloff, B.; Ulus, Firdevs

For incomplete preference relations that are represented by multiple priors and/or multiple (possibly multivariate) utility functions, we define a certainty equivalent as well as utility indifference price bounds as set-valued functions of the claim. Furthermore, we motivate and introduce the notions of a weak and a strong certainty equivalent. We show that our definitions contain as special cases some definitions found so far in the literature on complete or special incomplete preferences. We prove monotonicity and convexity properties of utility buy and sell prices that hold in total analogy to the properties of the scalar indifference prices for complete preferences. We show how the (weak and strong) set-valued certainty equivalent as well as the indifference price bounds can be computed or approximated by solving convex vector optimization problems. Numerical examples and their economic interpretations are given for the univariate as well as the multivariate case.

Item Open Access: Geometric duality results and approximation algorithms for convex vector optimization problems (Society for Industrial and Applied Mathematics Publications, 2023-01-27). Ararat, Çağın; Tekgül, S.; Ulus, Firdevs

We study geometric duality for convex vector optimization problems. For a primal problem with a q-dimensional objective space, we formulate a dual problem with a (q+1)-dimensional objective space. Consequently, different from an existing approach, the geometric dual problem does not depend on a fixed direction parameter, and the resulting dual image is a convex cone. We prove a one-to-one correspondence between certain faces of the primal and dual images.
In addition, we show that a polyhedral approximation of one image gives rise to a polyhedral approximation of the other. Based on this, we propose a geometric dual algorithm which solves the primal and dual problems simultaneously and is free of direction-biasedness. We also modify an existing direction-free primal algorithm in such a way that it solves the dual problem as well. We test the performance of the algorithms on randomly generated problem instances, using the so-called primal error and the hypervolume indicator as performance measures. © 2023 Society for Industrial and Applied Mathematics.

Item Open Access: A new geometric duality and approximation algorithms for convex vector optimization problems (2021-07). Tekgül, Simay

In the literature, there are different algorithms for solving convex vector optimization problems, in the sense of approximating the set of all minimal points in the objective space. One of the main approaches is to provide outer approximations to this set and to improve the approximation iteratively by solving scalarization models. In addition to the outer approximation algorithms, which are referred to as primal algorithms, there are also geometric dual algorithms, which work on a dual space and approximate the set of all maximal elements of a geometric dual problem. In most of the primal and dual algorithms in the literature, the scalarization methods, the solution concepts and the design of the algorithms depend on a fixed direction vector from the ordering cone. Recently, a primal algorithm that does not depend on a direction parameter was proposed in (Ararat et al., 2021). Using the primal algorithm in (Ararat et al., 2021), we construct a new geometric dual algorithm based on a new geometric duality relation between the primal and dual images. This relation is shown by providing an inclusion-reversing one-to-one correspondence between weakly minimal proper faces of the primal image and maximal proper faces of the dual image.
For a primal problem with a q-dimensional objective space, we present a dual problem with a (q+1)-dimensional objective space. Consequently, the resulting dual image is a convex cone. The primal algorithm in (Ararat et al., 2021) is modified to give a finite epsilon-solution to the dual problem as well as a finite weak epsilon-solution to the primal problem. The constructed geometric dual algorithm gives a finite epsilon-solution to the dual problem; moreover, it gives a finite weak delta-solution to the primal problem, where delta is determined by epsilon and the structure of the underlying ordering cone. We implement the primal and dual algorithms using MATLAB and test their performance on randomly generated convex vector optimization problems. The tests are performed with different dimensions of the objective and decision spaces, different ordering cones, different ℓp-norms, and different stopping criteria. It is observed that under the epsilon stopping criterion, the dual algorithm achieves only a fraction of the allowed approximation error, epsilon, resulting in a longer runtime. When runtime is used as the stopping criterion, the dual algorithm returns a closer approximation for higher dimensions of the objective space.

Item Open Access: Norm minimization-based convex vector optimization algorithms (2022-08). Umer, Muhammad

This thesis is concerned with convex vector optimization problems (CVOPs). We propose an outer approximation algorithm (Algorithm 1) for solving CVOPs. In each iteration, the algorithm solves a norm-minimizing scalarization for a reference point in the objective space. The idea is inspired by some Benson-type algorithms in the literature that are based on the Pascoletti-Serafini scalarization. Since this scalarization needs a direction parameter, the efficiency of these algorithms depends on the selection of the direction parameter.
In contrast, our algorithm is free of direction-biasedness since it solves a scalarization that is based on minimizing a norm. However, the structure of such algorithms, including ours, has a built-in limitation which makes it difficult to perform convergence analysis. To overcome this, we modify the algorithm by introducing a suitable compact subset of the upper image. After the modification, we have Algorithm 2, in which norm-minimizing scalarizations are solved for points in the compact set. To the best of our knowledge, Algorithm 2 is the first algorithm for CVOPs that is proven to be finite. Finally, we propose a third algorithm for the purposes of convergence analysis (Algorithm 3), where a modified norm-minimizing scalarization is solved in each iteration. This scalarization includes an additional constraint which ensures that the algorithm deals with only a compact subset of the upper image from the beginning. Besides having the finiteness result, Algorithm 3 is the first CVOP algorithm with an estimate of a convergence rate. The experimental results, obtained using some benchmark test problems, show comparable performance of our algorithms with respect to an existing CVOP algorithm based on the Pascoletti-Serafini scalarization.

Item Open Access: Outer approximation algorithms for convex vector optimization problems (2021-07). Keskin, Irem Nur

There are different outer approximation algorithms in the literature that are designed to solve convex vector optimization problems in the sense that they approximate the upper image using polyhedral sets. At each iteration, these algorithms solve vertex enumeration and scalarization problems. The vertex enumeration problem is used to find the vertex representation of the current outer approximation. The scalarization problem is used to generate a weakly C-minimal element of the upper image as well as a supporting halfspace that supports the upper image at that point.
In this study, we present a general framework for such algorithms in which the Pascoletti-Serafini scalarization is used. This scalarization finds the minimum ‘distance’ from a reference point, which is usually taken as a vertex of the current outer approximation, to the upper image through a given direction. The reference point and the direction vector are the parameters of this scalarization. The motivation of this study is to come up with efficient methods to select the parameters of the Pascoletti-Serafini scalarization and to analyze the effects of these parameter selections on the performance of the algorithm. We first propose three rules to choose the direction parameter at each iteration. We conduct a preliminary computational study to observe the effects of these rules under various, rather simple, rules for vertex selection. Depending on the results of the preliminary analysis, we fix a direction selection rule to continue with. Moreover, we observe that vertex selection also has a significant impact on the performance, as expected. Then, we propose additional vertex selection rules, which are slightly more complicated than the previous ones and are designed with the motivation that they generate well-distributed points on the boundary of the upper image. Different from the existing vertex selection rules from the literature, they do not require solving additional single-objective optimization problems. Using some test problems, we conduct a computational study where three different measures are set as the stopping criteria: the approximation error, the runtime, and the cardinality of the solution set. We compare the proposed variants and some algorithms from the literature in terms of these measures, which are used as the stopping criteria, as well as an additional proximity measure, the hypervolume gap. We observe that the proposed variants have satisfactory results, especially in terms of runtime.
When the approximation error is chosen as the stopping criterion, the proposed variants require less CPU time compared to the algorithms from the literature. Under fixed runtime, they return better proximity measures in general. Under fixed cardinality, the algorithms from the literature yield better proximity measures, but they require significantly more CPU time than the proposed variants.

Item Open Access: Outer approximation algorithms for convex vector optimization problems (Taylor and Francis Ltd., 2023-02-09). Keskin, İrem Nur; Ulus, Firdevs

In this study, we present a general framework of outer approximation algorithms to solve convex vector optimization problems, in which the Pascoletti-Serafini (PS) scalarization is solved iteratively. This scalarization finds the minimum ‘distance’ from a reference point, which is usually taken as a vertex of the current outer approximation, to the upper image through a given direction. We propose efficient methods to select the parameters (the reference point and direction vector) of the PS scalarization and analyse the effects of these on the overall performance of the algorithm. Different from the existing vertex selection rules from the literature, the proposed methods do not require solving additional single-objective optimization problems. Using some test problems, we conduct an extensive computational study where three different measures are set as the stopping criteria: the approximation error, the runtime, and the cardinality of the solution set. We observe that the proposed variants have satisfactory results, especially in terms of runtime, compared to the existing variants from the literature. © 2023 Informa UK Limited, trading as Taylor & Francis Group.
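Several of the listed works build on the Pascoletti-Serafini scalarization: minimize z such that f(x) ≤ v + z·d componentwise, for a reference point v and direction d. A minimal sketch for a hypothetical bicriteria problem (the objective functions, reference point, and direction below are illustrative and not taken from any of the listed works; for d > 0 the problem reduces to minimizing max_i (f_i(x) − v_i)/d_i, which is convex, so a simple ternary search suffices here):

```python
# Pascoletti-Serafini scalarization sketch for a toy bicriteria problem:
#   min z  s.t.  f(x) <= v + z*d  (componentwise, d > 0)
# which is equivalent to minimizing g(x) = max_i (f_i(x) - v_i) / d_i.

def f(x):
    # Hypothetical objectives: f1(x) = x^2, f2(x) = (x - 1)^2.
    return (x * x, (x - 1.0) ** 2)

def ps_value(v, d, lo, hi, tol=1e-9):
    """Solve the PS scalarization on [lo, hi] by ternary search.

    g is a pointwise max of convex functions, hence convex (unimodal),
    so ternary search converges to the minimizer.
    """
    def g(x):
        return max((fi - vi) / di for fi, vi, di in zip(f(x), v, d))

    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if g(m1) <= g(m2):
            hi = m2
        else:
            lo = m1
    x_opt = (lo + hi) / 2.0
    return g(x_opt), x_opt

# From v = (0,0) in direction d = (1,1): z* = min_x max(x^2, (x-1)^2),
# attained at x = 0.5 with z* = 0.25, i.e. the weakly minimal point (0.25, 0.25).
z_star, x_star = ps_value(v=(0.0, 0.0), d=(1.0, 1.0), lo=0.0, hi=1.0)
print(z_star, x_star)  # approximately 0.25 at x = 0.5
```

In the Benson-type algorithms described above, one such scalarization is solved per iteration, and the resulting supporting halfspace refines the polyhedral outer approximation of the upper image.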