Browsing by Subject "Decomposition"
Now showing 1 - 17 of 17
Item Embargo: A decomposable branch-and-price formulation for optimal classification trees (2024-07). Yöner, Elif Rana.
Construction of Optimal Classification Trees (OCTs) using mixed-integer programs is a promising approach, as it returns a tree with minimum classification error. Yet solving integer programs to optimality is known to be computationally costly, especially as the size of the instance and the depth of the tree grow, calling for efficient solution methods. Our research presents a new, decomposable model which lends itself to efficient solution algorithms such as branch-and-price. We model the classification tree using a “pattern-based” formulation, deciding which feature should be used to split the data at each branching node of each leaf. Our results are promising, illustrating the potential of decomposition in the domain of binary OCTs.

Item Open Access: Coding of fingerprint images using binary subband decomposition and vector quantization (SPIE, 1998-01). Gerek, Ömer N.; Çetin, A. Enis.
In this paper, compression of binary digital fingerprint images is considered. High compression ratios for fingerprint images are essential for handling the huge number of images in databases. In our method, the fingerprint image is first processed by a binary nonlinear subband decomposition filter bank, and the resulting subimages are coded using vector quantizers designed for quantizing binary images. It is observed that the discriminating properties of the fingerprint images are preserved at very low bit rates. Simulation results are presented.

Item Open Access: Computational methods for CTMCs (John Wiley & Sons, 2011). Dayar, Tuğrul; Stewart, W. J.; Cochran, J. J.; Cox, L. A.; Keskinocak, P.; Kharoufeh, J. P.; Smith, J. C.
This article concerns the computation of stationary and transient distributions of continuous-time Markov chains (CTMCs).
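As a concrete illustration of transient analysis for CTMCs, here is a minimal sketch of the classic uniformization method in plain Python; the two-state generator matrix below is made up for illustration.

```python
# Hypothetical sketch of uniformization for CTMC transient distributions:
# pi(t) = sum_k Poisson(k; q*t) * pi0 * P^k, with P = I + Q/q and
# q >= max_i |Q[i][i]|. The generator Q below is invented.
import math

def uniformization(pi0, Q, t, tol=1e-12):
    n = len(Q)
    q = max(abs(Q[i][i]) for i in range(n)) * 1.05   # uniformization rate
    # DTMC transition matrix P = I + Q/q
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / q for j in range(n)]
         for i in range(n)]
    result = [0.0] * n
    v = list(pi0)                  # holds pi0 * P^k, updated each iteration
    weight = math.exp(-q * t)      # Poisson weight for k = 0
    k, acc = 0, 0.0
    while acc < 1.0 - tol:         # stop once Poisson weights nearly sum to 1
        for i in range(n):
            result[i] += weight * v[i]
        acc += weight
        k += 1
        weight *= q * t / k        # next Poisson weight
        v = [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]
    return result

# Two-state chain: rate 2 from state 0 to 1, rate 1 back.
Q = [[-2.0, 2.0], [1.0, -1.0]]
pi_t = uniformization([1.0, 0.0], Q, t=10.0)
# For large t this approaches the stationary distribution (1/3, 2/3).
```

The truncation point is chosen adaptively: the loop stops once the accumulated Poisson weights reach 1 - tol, which bounds the truncation error of the series.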
Once the problem has been formulated, it is shown how computational methods for computing stationary distributions of discrete-time Markov chains can be applied in the continuous-time case. This is not so for the case of transient distributions, which turns out to be a much more difficult problem in general. Different approaches to computing transient distributions of CTMCs are explored, from the simple and efficient uniformization method, through matrix decomposition and powering techniques, to ordinary differential equation (ODE) solvers. This latter approach is the only one currently available for nonhomogeneous CTMCs. The basic concept is explained using simple Euler methods, but formulae for more advanced and efficient single-step Runge–Kutta and implicit multistep BDF methods are provided.

Item Open Access: A decomposition approach for undiscounted two-person zero-sum stochastic games (Springer-Verlag Berlin Heidelberg, 1999). Avşar, Z. M.; Baykal-Gürsoy, M.
Two-person zero-sum stochastic games are considered under the long-run average expected payoff criterion. State and action spaces are assumed finite. By making use of the concept of maximal communicating classes, the following decomposition algorithm is introduced for solving two-person zero-sum stochastic games: First, the state space is decomposed into maximal communicating classes. Then, these classes are organized in a hierarchical order, where each level may contain more than one maximal communicating class. Best stationary strategies for the states in a maximal communicating class at a given level are determined by using the best stationary strategies of the states in the previous levels that are accessible from that class. At the initial level, a restricted game is defined for each closed maximal communicating class, and these restricted games are solved independently.
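The first step of the decomposition just described, finding maximal communicating classes and arranging them into accessibility levels, corresponds to computing strongly connected components of the transition graph and leveling its condensation. A minimal sketch (the tiny state space and arcs are made up for illustration):

```python
# Sketch: maximal communicating classes = strongly connected components
# (Kosaraju's algorithm), then levels by accessibility: closed classes
# (no outgoing arcs to other classes) sit at level 0.
from collections import defaultdict

def communicating_classes(states, arcs):
    fwd, rev = defaultdict(list), defaultdict(list)
    for u, v in arcs:
        fwd[u].append(v)
        rev[v].append(u)
    order, seen = [], set()
    def dfs1(u):                       # first pass: record finish order
        seen.add(u)
        for v in fwd[u]:
            if v not in seen:
                dfs1(v)
        order.append(u)
    for s in states:
        if s not in seen:
            dfs1(s)
    comp, classes = {}, []
    def dfs2(u, c):                    # second pass on the reversed graph
        comp[u] = c
        classes[c].append(u)
        for v in rev[u]:
            if v not in comp:
                dfs2(v, c)
    for u in reversed(order):
        if u not in comp:
            classes.append([])
            dfs2(u, len(classes) - 1)
    return classes, comp

def class_levels(comp, arcs):
    succ = defaultdict(set)            # condensation DAG edges
    for u, v in arcs:
        if comp[u] != comp[v]:
            succ[comp[u]].add(comp[v])
    memo = {}
    def level(c):                      # 0 for closed classes, else 1 + max succ
        if c not in memo:
            memo[c] = 0 if not succ[c] else 1 + max(level(d) for d in succ[c])
        return memo[c]
    return {c: level(c) for c in set(comp.values())}

# Made-up chain: {0,1} -> {2,3} -> {4,5}, with {4,5} closed.
states = list(range(6))
arcs = [(0, 1), (1, 0), (1, 2), (2, 3), (3, 2), (3, 4), (4, 5), (5, 4)]
classes, comp = communicating_classes(states, arcs)
lv = class_levels(comp, arcs)
```

In the algorithm above, the restricted games of the level-0 (closed) classes would be solved first, and higher levels would then reuse those solutions.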
It is shown that the proposed decomposition algorithm is exact in the sense that the solution obtained from the decomposition procedure gives the best stationary strategies for the original stochastic game.

Item Open Access: Decompositional analysis of Kronecker structured Markov chains (Kent State University, 2008). Bao, Y.; Bozkur, I. N.; Dayar, T.; Sun, X.; Trivedi, K. S.
This contribution proposes a decompositional iterative method with low memory requirements for the steady-state analysis of Kronecker structured Markov chains. The Markovian system is formed by a composition of subsystems using the Kronecker sum operator for local transitions and the Kronecker product operator for synchronized transitions. Even though the interactions among subsystems, which are captured by synchronized transitions, need not be weak, numerical experiments indicate that the solver benefits considerably from weak interactions among subsystems and is to be recommended specifically in this case. © 2008, Kent State University.

Item Open Access: Deterministic and stochastic team formation problems (2021-01). Berktaş, Nihal.
In various organizations, physical or virtual teams are formed to perform jobs that require different skills. The success of a team depends on the technical capabilities of the team members as well as the quality of communication among them. We study different variants of the team formation problem, where the goal is to build the best team with respect to given criteria. First, we study a deterministic team formation problem which aims to construct a capable team that can communicate and collaborate effectively. To measure the quality of communication, we assume the candidates constitute a social network, and we define a cost of communication using the proximity of people in the social network. We minimize the sum of all pairwise communication costs, and we impose an upper bound on the largest communication cost.
This problem is formulated as a constrained quadratic set covering problem. Our experiments show that a general-purpose solver is capable of solving small and medium-sized instances to optimality. We propose a branch-and-bound algorithm to solve larger sizes: we reformulate the problem and relax it in such a way that it decomposes into a series of linear set covering problems, and we impose the relaxed constraints through branching. Our computational experiments show that the algorithm is capable of solving large-sized instances, which are intractable for the solver. Second, we consider a two-stage stochastic team formation problem where the objective is to minimize the expected communication cost of the team. We assume that for a subset of pairs the communication costs are uncertain but have a known discrete distribution. The first stage is a trial stage where the decision-maker chooses a limited number of pairs from this subset. The actual cost values of the chosen pairs are realized before the second stage. Hence, the uncertainty in this problem is decision-dependent, also called endogenous, because the first-stage decisions determine for which parameters the uncertainty will resolve. For this problem, we give two formulations; the first one contains a set of non-anticipativity constraints similar to the models in the related literature. In the second, we are able to eliminate these constraints by changing the objective function into a quadratic one, which is linearized by a set of extra binary variables. We show that the size of instances we can solve with these formulations using a commercial solver is limited. Therefore, we develop a Benders decomposition-based branch-and-cut algorithm that exploits the decision-dependent nature of the problem to partition scenarios and uses tight linear relaxations to obtain strong cuts. We show the efficiency of the algorithm by presenting results of experiments conducted with randomly generated instances.
Finally, we study a multi-stage team formation problem where the objective is to minimize the monetary cost, including hiring and outsourcing costs. In this problem, stages correspond to projects which are carried out consecutively. Each project consists of several tasks, each of which requires a human resource. We assume that due to incomplete information there is uncertainty in people's performances, and consequently the time a person needs to complete a task is random for some person-task pairs. When a person is assigned to a task, we learn how long it takes for this person to finish the task. Hence, the uncertainty is again decision-dependent. If the duration of a task exceeds the allowable time for a project, then the manager must hire an external resource to speed up the process. We present an integer programming formulation for this problem and explain that the size of the formulation strongly depends on the number of random parameters and scenarios. While this deterministic equivalent formulation can be solved with a commercial solver for small-sized instances, it easily becomes intractable when the number of random parameters increases by one. For such cases where exact methods are not promising, we investigate heuristic methods to obtain tight bounds and near-optimal solutions. In the related literature, different Lagrangian decomposition methods have been developed for such stochastic problems. In this study, we show that the convergence of existing methods is very slow, and we propose an alternative method where a relaxation of the formulation is solved by a decomposition-based branch-and-bound algorithm.

Item Open Access: Distributed k-Core view materialization and maintenance for large dynamic graphs (Institute of Electrical and Electronics Engineers, 2014-10). Aksu, H.; Canim, M.; Chang, Yuan-Chi; Korpeoglu, I.; Ulusoy, O.
In graph theory, the k-core is a key metric used to identify subgraphs of high cohesion, also known as the ‘dense’ regions of a graph.
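The k-core (the maximal subgraph in which every vertex has degree at least k) can be computed by repeated peeling; the distributed, incremental algorithms discussed in this paper build on the same property. A minimal single-machine sketch, with a made-up toy graph:

```python
# Single-machine k-core by iterative peeling: repeatedly delete
# vertices whose remaining degree is below k.
def k_core(adj, k):
    """adj: dict vertex -> set of neighbours (undirected graph).
    Returns the vertex set of the k-core (possibly empty)."""
    deg = {v: len(ns) for v, ns in adj.items()}
    queue = [v for v, d in deg.items() if d < k]   # initial peel candidates
    removed = set()
    while queue:
        v = queue.pop()
        if v in removed:
            continue
        removed.add(v)
        for w in adj[v]:                           # peeling v lowers degrees
            if w not in removed:
                deg[w] -= 1
                if deg[w] < k:
                    queue.append(w)
    return set(adj) - removed

# Triangle {a, b, c} plus a pendant vertex d: the 2-core drops d.
graph = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
core = k_core(graph, 2)
```

Here `core` is `{"a", "b", "c"}`: the pendant vertex `d` has degree 1 and is peeled away, while the triangle survives.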
As real-world graphs such as social network graphs grow in size, their contents get richer, and their topologies change dynamically, we are challenged not only to materialize k-core subgraphs once but also to maintain them in order to keep up with continuous updates. Adding to the challenge is that real-world data sets are outgrowing the capacity of a single server and its main memory. These challenges inspired us to propose a new set of distributed algorithms for k-core view construction and maintenance on a horizontally scaling storage and computing platform. Our algorithms execute against the partitioned graph data in parallel and take advantage of k-core properties to aggressively prune unnecessary computation. Experimental evaluation results demonstrated orders-of-magnitude speedups and the advantages of maintaining k-cores incrementally and in batch windows over complete reconstruction. Our algorithms thus enable practitioners to create and maintain many k-core views on different topics in rich social network content simultaneously.

Item Open Access: Distributed scheduling: a review of concepts and applications (Taylor & Francis, 2010). Toptal, A.; Sabuncuoglu, I.
Distributed scheduling (DS) is an approach that enables local decision makers to create schedules that consider local objectives and constraints within the boundaries of the overall system objectives. Local decisions from different parts of the system are then integrated through coordination and communication mechanisms. Distributed scheduling attracts the interest of many researchers from a variety of disciplines, such as computer science, economics, manufacturing, and service operations management. One reason is that the problems faced in this area include issues ranging from information architectures, to negotiation mechanisms, to the design of scheduling algorithms. In this paper, we provide a survey and a critical analysis of the literature on distributed scheduling.
While we propose a comprehensive taxonomy that accounts for many factors related to distributed scheduling, we also analyse the body of research in which the scheduling aspect is rigorously discussed. The focus of this paper is to review the studies that concern scheduling algorithms in a distributed architecture, not, for example, protocol languages or database architectures. The contribution of this paper is twofold: to unify the literature within our scope under a common terminology, and to determine the critical design factors unique to distributed scheduling and in relation to centralised scheduling.

Item Open Access: Flora: a framework for decomposing software architecture to introduce local recovery (John Wiley & Sons Ltd., 2009-07). Sözer, H.; Tekinerdoǧan, B.; Akşit, M.
The decomposition of software architecture into modular units is usually driven by the required quality concerns. In this paper we focus on the impact of the local recovery concern on the decomposition of the software system. For achieving local recovery, the system needs to be decomposed into separate units that can be recovered in isolation. However, it appears that the decomposition required for recovery is usually not aligned with the decomposition based on functional concerns. Moreover, introducing local recovery to a software system while preserving the existing decomposition is not trivial and requires substantial development and maintenance effort. To reduce this effort, we propose a framework that supports the decomposition and implementation of software architecture for local recovery. The framework provides reusable abstractions for defining recoverable units and the necessary coordination and communication protocols for recovery. We discuss our experiences in the application and evaluation of the framework for introducing local recovery to the open-source media player MPlayer.
Copyright 2009 John Wiley & Sons, Ltd.

Item Open Access: A Lagrangean relaxation and decomposition algorithm for the video placement and routing problem (Elsevier, 2007). Bektaş, T.; Oǧuz, O.; Ouveysi, I.
Video on demand (VoD) is a technology used to provide a number of programs to a number of users on request. In developing a VoD system, a fundamental problem is load balancing, which is further characterized by optimally placing videos on a number of predefined servers and routing the user program requests to available resources. In this paper, an exact solution algorithm is described to solve the video placement and routing problem. The algorithm is based on Lagrangean relaxation and decomposition. The novelty of the approach lies in the use of integer programs to obtain feasible solutions within the algorithm. Computational experimentation reveals that for randomly generated problems with up to 100 nodes and 250 videos, the use of such integer programs helps greatly in obtaining good-quality solutions (typically within 5% of the optimal solution), even in the very early iterations of the algorithm.

Item Open Access: Lot sizing with perishable items (2019-07). Arslan, Nazlıcan.
We address the uncapacitated lot sizing problem for a perishable item that has a deterministic and fixed lifetime. In the first part of the study, we assume that the demand is also deterministic. We conduct a polyhedral analysis and derive valid inequalities to strengthen the LP relaxation. We develop a separation algorithm for the valid inequalities and propose a branch-and-cut algorithm to solve the problem. We conduct a computational study to test the effectiveness of the valid inequalities. In the second part, we study the multistage stochastic version of the problem, where the demand is uncertain. We use the valid inequalities found for the deterministic problem to strengthen the LP relaxation of the stochastic problem and test their effectiveness.
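As a hypothetical illustration of the deterministic core of this problem, the classic Wagner-Whitin dynamic program for uncapacitated lot sizing can be restricted so that a batch serves demand only within the item's lifetime. All costs and demands below are made up, and the sketch carries over the zero-inventory-ordering assumption from classic lot sizing rather than reproducing the thesis's formulation:

```python
# Wagner-Whitin-style DP with a fixed lifetime L: a batch produced in
# period j may serve demand only in periods j..j+L (all data invented).
def lot_sizing(demand, setup, hold, lifetime):
    T = len(demand)
    INF = float("inf")
    best = [INF] * (T + 1)      # best[t] = min cost of covering periods 0..t-1
    best[0] = 0.0
    for t in range(1, T + 1):
        # the batch covering periods j..t-1 must respect the lifetime
        for j in range(max(0, t - 1 - lifetime), t):
            cost = setup + sum(hold * (p - j) * demand[p] for p in range(j, t))
            if best[j] + cost < best[t]:
                best[t] = best[j] + cost
    return best[T]

demand = [20, 0, 30, 40, 0, 10]
total = lot_sizing(demand, setup=50.0, hold=1.0, lifetime=2)
```

Relaxing the lifetime can only lower (or keep) the optimal cost, which is a quick sanity check on the recursion: `lot_sizing(demand, 50.0, 1.0, lifetime=10)` is at most `total`.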
As the size of the stochastic model grows exponentially in the number of periods, we also implement a decomposition method based on scenario grouping to obtain lower and upper bounds.

Item Open Access: Molecular entrapment of volatile organic compounds (VOCs) by electrospun cyclodextrin nanofibers (Elsevier, 2016-02). Celebioglu, A.; Sen, H. S.; Durgun, Engin; Uyar, Tamer.
In this paper, we report the molecular entrapment performance of hydroxypropyl-beta-cyclodextrin (HPβCD) and hydroxypropyl-gamma-cyclodextrin (HPγCD) electrospun nanofibers (NF) for two common volatile organic compounds (VOCs): aniline and benzene. The encapsulation efficiency of the CD samples was investigated with respect to various factors: CD form (NF and powder), electrospinning solvent (DMF and water), CD type (HPβCD and HPγCD), and VOC type (aniline and benzene). BET analysis indicated that electrospun CD NF have a higher surface area compared to their powder form. In addition, DMA measurements provided information about the mechanical properties of the CD NF. The encapsulation capability of CD NF and CD powder was investigated by 1H-NMR and HPLC techniques. The observed results suggested that CD NF can entrap a higher amount of VOCs from their surroundings compared to their powder forms. Moreover, the molecular entrapment efficiency of CD NF also depends on the CD, solvent, and VOC types. The inclusion complexation between CD and VOCs was determined by using the TGA technique, from the higher decomposition temperature of the VOCs. Finally, our results were supported by modeling studies, which indicated the variation in complexation efficiency between CD and VOC types. Here, the inclusion complexation ability of CD molecules is combined with the very high surface area and versatile features of CD NF.
These findings revealed that electrospun CD NF can serve as a useful filtering material for air filtration purposes, owing to their capability for molecular entrapment of VOCs.

Item Open Access: A novel optimization algorithm for video placement and routing (Institute of Electrical and Electronics Engineers, 2006). Bektaş, T.; Oǧuz, O.; Ouveysi, I.
In this paper, we propose a novel optimization algorithm for the solution of the video placement and routing problem, based on Lagrangean relaxation and decomposition. The main contribution can be stated as the use of integer programming models to obtain feasible solutions to the problem within the algorithm. Computational experimentation reveals that the use of such integer models helps greatly in obtaining good-quality solutions in a small amount of solution time.

Item Open Access: Parafac-spark: parallel tensor decompositions on spark (2019-08). Bekçe, Selim Eren.
Tensors are higher-order generalizations of matrices, widely used in many data science applications and scientific disciplines. The Canonical Polyadic Decomposition (also known as CPD/PARAFAC) is a widely adopted tensor factorization used to discover and extract latent features of tensors, usually applied via the alternating least squares (ALS) method. Developing efficient parallelization methods for PARAFAC on commodity clusters is important because, as common tensor sizes reach billions of nonzeros, a naive implementation would require infeasibly large intermediate memory. Implementations of PARAFAC-ALS on shared- and distributed-memory systems are available, but these systems require expensive cluster setups, are too low level, are not compatible with modern tooling, and are not fault tolerant by design. Many companies and data science communities prefer Apache Spark, a modern distributed computing framework with in-memory caching, and the Hadoop ecosystem of tools for their ease of use, compatibility, ability to run on commodity hardware, and fault tolerance.
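The CPD/PARAFAC-ALS factorization can be sketched in-memory with NumPy: each update is an MTTKRP (matricized tensor times Khatri-Rao product) followed by a small R x R solve, which is the kernel that distributed implementations parallelize. The synthetic rank-2 tensor below is made up, and this dense sketch does not reflect the sparse, partitioned data layout of a cluster implementation:

```python
# In-memory CP-ALS sketch for a 3-way array. The einsum calls compute
# the MTTKRP for each mode; the Gram matrices are Hadamard products of
# the factor Grams.
import numpy as np

def cp_als(X, rank, iters=300, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((X.shape[0], rank))
    B = rng.standard_normal((X.shape[1], rank))
    C = rng.standard_normal((X.shape[2], rank))
    for _ in range(iters):
        A = np.einsum('ijk,jr,kr->ir', X, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', X, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', X, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Recover a synthetic rank-2 tensor.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((d, 2)) for d in (4, 5, 6))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, rank=2)
Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
err = np.linalg.norm(X - Xhat) / np.linalg.norm(X)
```

On an exact low-rank tensor like this, ALS typically drives the relative reconstruction error `err` to near machine precision within a few hundred iterations.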
We developed PARAFAC-SPARK, an efficient, parallel, open-source implementation of PARAFAC on Spark, written in Scala. It can decompose 3D tensors stored in the common coordinate format in parallel with a low memory footprint by partitioning them as grids and utilizing the compressed sparse rows (CSR) format for efficient traversals. We followed and combined many of the algorithmic and methodological improvements of its predecessor implementations on Hadoop and distributed memory, and adapted them for Spark. During the kernel MTTKRP operation, by applying a multi-way dynamic partitioning scheme, we were also able to increase the number of reducers to be on par with the number of cores, achieving better utilization and a reduced memory footprint. We ran PARAFAC-SPARK on some real-world tensors and evaluated the effectiveness of each improvement as a series of variants compared with each other, as well as on some synthetically generated tensors with up to billions of rows to measure its scalability. Our fastest variant (PS-CSRSX) is up to 67% faster than our baseline Spark implementation (PS-COO) and up to 10 times faster than state-of-the-art Hadoop implementations.

Item Open Access: Radio communications interdiction problem (2020-01). Tanergüçlü, Türker.
Tactical communications have always played a pivotal role in maintaining effective command and control of troops operating in hostile, extremely fragile, and dynamic battlefield environments. Radio communications, in particular, have served as the backbone of tactical communications over the years and have proven very useful in meeting the information exchange needs of widely dispersed and highly mobile military units, especially in rugged terrain. Considering the complexity of today’s modern warfare, and in particular the emerging threats from the latest electronic warfare technologies, the need for optimally designed radio communications networks is more critical than ever.
Optimized communication network planning can minimize network vulnerabilities to modern threats and provide additional assurance of the continued availability and reliability of tactical communications. To this end, we present the Radio Communications Interdiction Problem (RCIP) to identify the optimal locations of transmitters on the battlefield that lead to a robust radio communications network by anticipating the degrading effects of intentional radio jamming attacks used by an adversary during electronic warfare. We formulate RCIP as a binary bilevel (max–min) programming problem, present the equivalent single-level formulation, and propose an exact solution method using a decomposition scheme. We enhance the performance of the algorithm by utilizing dominance relations, preprocessing, and initial starting heuristics. To reflect jamming more realistically, we introduce the probabilistic version of RCIP (P-RCIP), where a jamming probability is associated with each receiver site as a function of the prevalent jamming-to-signal ratios, leading to an expected coverage of receivers as the objective function. We approximate the nonlinearity in the jamming probability function using a piecewise linear convex function and solve this version by adapting the decomposition algorithm constructed for RCIP. Our extensive computational results on realistic scenarios that reflect different phases of a military conflict show the efficacy of the proposed solution methods. We provide valuable tactical insights by analyzing optimal solutions for these scenarios under varying parameters. Finally, we investigate the incorporation of limited artillery assets into communications planning by formulating RCIP with Artillery (RCIP-A) as a trilevel optimization problem and propose a nested decomposition method as an exact solution methodology.
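The max-min structure of such an interdiction problem can be illustrated by brute force on a toy instance: the planner picks transmitter sites to maximize the number of receivers that stay covered under the jammer's best response. All sites, coverage sets, and jamming sets below are invented, and enumeration only works at this scale; the decomposition schemes above are what make realistic instances solvable:

```python
# Toy max-min interdiction: the planner chooses p transmitter sites,
# the jammer then chooses q jammer sites to knock out covered receivers.
from itertools import combinations

def best_plan(sites, covers, jam_sites, jams, p, q):
    """covers[s]: receivers reached from transmitter site s;
    jams[j]: receivers knocked out by a jammer at site j."""
    def worst_case(chosen):
        reach = set().union(*(covers[s] for s in chosen))
        # Jammer's best response: minimize surviving coverage.
        return min(len(reach - set().union(*(jams[j] for j in attack)))
                   for attack in combinations(jam_sites, q))
    return max(combinations(sites, p), key=worst_case)

sites = [0, 1, 2]
covers = {0: {0, 1}, 1: {1, 2}, 2: {2, 3}}
jam_sites = [0, 1]
jams = {0: {0, 1}, 1: {2, 3}}
plan = best_plan(sites, covers, jam_sites, jams, p=2, q=1)
```

In this made-up instance the robust choice is sites {0, 2}: whichever jammer site is used, two receivers survive, whereas any other pair can be reduced to a single covered receiver.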
Additionally, we present computational results and tactical insights obtained from the solution of RCIP-A on predefined scenarios.

Item Open Access: Substrate temperature influence on the properties of GaN thin films grown by hollow-cathode plasma-assisted atomic layer deposition (AIP Publishing LLC, 2016-02). Alevli, M.; Gungor, N.; Haider, A.; Kizir, S.; Leghari, S. A.; Bıyıklı, Necmi.
Gallium nitride films were grown by hollow-cathode plasma-assisted atomic layer deposition using triethylgallium and N2/H2 plasma. An optimized recipe for the GaN film was developed, and the effect of substrate temperature was studied in both the self-limiting growth window and the thermal decomposition-limited growth region. With increased substrate temperature, film crystallinity improved, and the optical band edge decreased from 3.60 to 3.52 eV. The refractive index and the reflectivity in the Reststrahlen band increased with the substrate temperature. Compressive strain is observed for both samples, and the surface roughness is observed to increase with the substrate temperature. Despite these temperature-dependent material properties, the chemical composition, E1(TO) phonon position, and crystalline phases present in the GaN film were relatively independent of growth temperature.

Item Open Access: Synthesis and characterization of iron oxide derivatized mutant cowpea mosaic virus hybrid nanoparticles (Wiley-VCH Verlag GmbH & Co. KGaA, 2008). Martinez-Morales, A. A.; Portney, N. G.; Zhang, Y.; Destito, G.; Budak, G.; Özbay, Ekmel; Manchester, M.; Ozkan, C. S.; Ozkan, M.
The enhanced local magnetic field strength was qualitatively analyzed by magnetic force microscopy (MFM), demonstrating a characteristic advantage of attaching derivatized magnetic iron oxide (IO) nanoparticles in an organic medium. The synthesis of 11 nm IO nanoparticles was carried out under a nitrogen atmosphere using the standard Schlenk technique.
The biocompatible γ-Fe2O3-COOH nanoparticles were synthesized by thermal decomposition of Fe(CO)5 and surface modified. Atomic force microscopy (AFM) was used to structurally characterize the as-synthesized IO nanoparticles on a silicon substrate. A histogram of the size distribution of the IO nanoparticles, determined from 68 individual measurements on single IO nanoparticles, exhibited a mean size of ~11 nm. MFM showed that the textured regions observed on each hybrid are indicative of IO nanoclusters decorating the surface of single virions.