Browsing by Subject "Scalarization"
Now showing 1 - 3 of 3
Item (Open Access)
Geometric duality results and approximation algorithms for convex vector optimization problems
(Society for Industrial and Applied Mathematics Publications, 2023-01-27) Ararat, Çağın; Tekgül, S.; Ulus, Firdevs
We study geometric duality for convex vector optimization problems. For a primal problem with a q-dimensional objective space, we formulate a dual problem with a (q+1)-dimensional objective space. Consequently, different from an existing approach, the geometric dual problem does not depend on a fixed direction parameter, and the resulting dual image is a convex cone. We prove a one-to-one correspondence between certain faces of the primal and dual images. In addition, we show that a polyhedral approximation for one image gives rise to a polyhedral approximation for the other. Based on this, we propose a geometric dual algorithm which solves the primal and dual problems simultaneously and is free of direction-biasedness. We also modify an existing direction-free primal algorithm in such a way that it solves the dual problem as well. We test the performance of the algorithms for randomly generated problem instances by using the so-called primal error and hypervolume indicator as performance measures. © 2023 Society for Industrial and Applied Mathematics.

Item (Open Access)
A new geometric duality and approximation algorithms for convex vector optimization problems
(2021-07) Tekgül, Simay
In the literature, there are different algorithms for solving convex vector optimization problems, in the sense of approximating the set of all minimal points in the objective space. One of the main approaches is to provide outer approximations to this set and to improve the approximation iteratively by solving scalarization models. In addition to these outer approximation algorithms, which are referred to as primal algorithms, there are also geometric dual algorithms, which work on a dual space and approximate the set of all maximal elements of a geometric dual problem.
In most of the primal and dual algorithms in the literature, the scalarization methods, the solution concepts, and the design of the algorithms depend on a fixed direction vector from the ordering cone. Recently, a primal algorithm that does not depend on a direction parameter was proposed in (Ararat et al., 2021). Using this primal algorithm, we construct a new geometric dual algorithm based on a new geometric duality relation between the primal and dual images. This relation is established by an inclusion-reversing one-to-one correspondence between the weakly minimal proper faces of the primal image and the maximal proper faces of the dual image. For a primal problem with a q-dimensional objective space, we present a dual problem with a (q+1)-dimensional objective space. Consequently, the resulting dual image is a convex cone. The primal algorithm in (Ararat et al., 2021) is modified to give a finite epsilon-solution to the dual problem as well as a finite weak epsilon-solution to the primal problem. The constructed geometric dual algorithm gives a finite epsilon-solution to the dual problem; moreover, it gives a finite weak delta-solution to the primal problem, where delta is determined by epsilon and the structure of the underlying ordering cone. We implement the primal and dual algorithms in MATLAB and test their performance on randomly generated convex vector optimization problems. The tests are performed with different dimensions of the objective and decision spaces, different ordering cones, different ℓp-norms, and different stopping criteria. It is observed that the dual algorithm achieves only a fraction of the allowed approximation error epsilon, resulting in a longer runtime when epsilon is used as the stopping criterion.
When runtime is used as the stopping criterion, the dual algorithm returns a closer approximation for higher dimensions of the objective space.

Item (Open Access)
Norm minimization-based convex vector optimization algorithms
(2022-08) Umer, Muhammad
This thesis is concerned with convex vector optimization problems (CVOPs). We propose an outer approximation algorithm (Algorithm 1) for solving CVOPs. In each iteration, the algorithm solves a norm-minimizing scalarization for a reference point in the objective space. The idea is inspired by some Benson-type algorithms in the literature that are based on Pascoletti-Serafini scalarization. Since this scalarization needs a direction parameter, the efficiency of these algorithms depends on the selection of the direction parameter. In contrast, our algorithm is free of direction biasedness since it solves a scalarization that is based on minimizing a norm. However, the structure of such algorithms, including ours, has a built-in limitation that makes it difficult to perform convergence analysis. To overcome this, we modify the algorithm by introducing a suitable compact subset of the upper image. After the modification, we obtain Algorithm 2, in which norm-minimizing scalarizations are solved for points in this compact set. To the best of our knowledge, Algorithm 2 is the first algorithm for CVOPs that is proven to be finite. Finally, we propose a third algorithm for the purposes of convergence analysis (Algorithm 3), where a modified norm-minimizing scalarization is solved in each iteration. This scalarization includes an additional constraint which ensures that the algorithm deals with only a compact subset of the upper image from the beginning. Besides having the finiteness result, Algorithm 3 is the first CVOP algorithm with an estimate of a convergence rate.
The experimental results, obtained on some benchmark test problems, show that our algorithms perform comparably to an existing CVOP algorithm based on Pascoletti-Serafini scalarization.
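The abstracts above repeatedly contrast the Pascoletti-Serafini scalarization, which requires a direction parameter, with a direction-free norm-minimizing scalarization. A minimal sketch of both on a toy biobjective instance may make the distinction concrete; the problem instance, reference point, variable names, and use of SciPy's SLSQP solver are all illustrative assumptions, not details taken from these works.

```python
# Sketch (illustrative, not the authors' implementations): compare the
# Pascoletti-Serafini and norm-minimizing scalarizations on a toy CVOP.
import numpy as np
from scipy.optimize import minimize

# Toy CVOP: minimize f(x) = (x1, x2) w.r.t. the ordering cone C = R^2_+
# over the disk X = {x : (x1-1)^2 + (x2-1)^2 <= 1}. The weakly minimal
# frontier is the lower-left arc of the circle, from (0, 1) to (1, 0).
def in_disk(x1, x2):
    return 1.0 - (x1 - 1.0) ** 2 - (x2 - 1.0) ** 2  # >= 0 iff feasible

v = np.array([0.0, 0.0])  # reference point in the objective space

# Pascoletti-Serafini scalarization (needs a direction d):
#   min t  s.t.  f(x) <= v + t*d componentwise,  x in X.
d = np.array([1.0, 1.0])
ps = minimize(
    lambda w: w[2],                    # w = (x1, x2, t); minimize t
    x0=[0.5, 0.5, 1.0],
    constraints=[
        {"type": "ineq", "fun": lambda w: v[0] + w[2] * d[0] - w[0]},
        {"type": "ineq", "fun": lambda w: v[1] + w[2] * d[1] - w[1]},
        {"type": "ineq", "fun": lambda w: in_disk(w[0], w[1])},
    ],
    method="SLSQP",
)

# Norm-minimizing scalarization (direction-free):
#   min ||z||  s.t.  f(x) <= v + z componentwise,  z >= 0,  x in X.
# We minimize ||z||^2 for smoothness; the minimizer is unchanged.
nm = minimize(
    lambda w: w[2] ** 2 + w[3] ** 2,   # w = (x1, x2, z1, z2)
    x0=[0.5, 0.5, 0.6, 0.6],
    constraints=[
        {"type": "ineq", "fun": lambda w: v[0] + w[2] - w[0]},
        {"type": "ineq", "fun": lambda w: v[1] + w[3] - w[1]},
        {"type": "ineq", "fun": lambda w: w[2]},
        {"type": "ineq", "fun": lambda w: w[3]},
        {"type": "ineq", "fun": lambda w: in_disk(w[0], w[1])},
    ],
    method="SLSQP",
)

t_star = ps.x[2]                       # optimal Pascoletti-Serafini value
z_norm = np.hypot(nm.x[2], nm.x[3])    # optimal norm value
print(t_star, z_norm)
```

By the geometry of this instance, both scalarizations identify the same weakly minimal point (1 - 1/sqrt(2), 1 - 1/sqrt(2)): the Pascoletti-Serafini value is t* = 1 - 1/sqrt(2) and the Euclidean norm value is sqrt(2) - 1. The point is that the first result depends on the chosen d while the second is determined by v alone, which is the direction-biasedness the direction-free algorithms avoid.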