Browsing by Subject "Algorithms."
Now showing 1 - 20 of 22
Item Open Access: 3-dimensional median-based algorithms in image sequence processing (1990), Alp, Münire Bilge
This thesis introduces new 3-dimensional median-based algorithms for two of the main research areas in image sequence processing: image sequence enhancement and image sequence coding. Two new nonlinear filters are developed in the field of image sequence enhancement. The motion performance and the output statistics of these filters are evaluated. The simulations show that the filters improve image quality to a large extent compared to other examples from the literature. The second field addressed is image sequence coding. A new 3-dimensional median-based coding and decoding method is developed for stationary images with the aim of good slow-motion performance. All the algorithms developed are simulated on real image sequences using a video sequencer.

Item Open Access: Algorithms for 2-edge connectivity with fixed costs in telecommunications networks (2011), Güzel, Umut
In this thesis, several algorithms are developed to provide cost-effective and survivable communication in telecommunications networks. In its broadest sense, a survivable network is one that can maintain communication even in the presence of a physical breakdown. There are several ways of providing survivable communication in a given network. Our choice is to hedge against single link failures and provide two edge-disjoint paths for every source-destination pair. Each edge in the network is assumed to have a variable unit routing cost and a fixed usage cost. Our objective is the minimization of the total routing cost of the traffic demand and the fixed cost of the utilized links.
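As an illustrative aside on the 3-dimensional median filtering used in the image sequence item above: the core operation replaces each sample with the median of a small spatiotemporal neighborhood. The minimal sketch below (not the thesis's actual filters, which are more elaborate) applies a 3x3x3 median over a sequence of frames, clamping indices at the borders:

```python
from statistics import median

def median3d(seq):
    """Filter an image sequence (list of 2-D lists) with a 3x3x3 median.

    Border samples are handled by clamping indices to the valid range.
    """
    T, H, W = len(seq), len(seq[0]), len(seq[0][0])
    clamp = lambda v, hi: max(0, min(v, hi - 1))
    out = [[[0] * W for _ in range(H)] for _ in range(T)]
    for t in range(T):
        for y in range(H):
            for x in range(W):
                # gather the 27 spatiotemporal neighbors of (t, y, x)
                window = [seq[clamp(t + dt, T)][clamp(y + dy, H)][clamp(x + dx, W)]
                          for dt in (-1, 0, 1)
                          for dy in (-1, 0, 1)
                          for dx in (-1, 0, 1)]
                out[t][y][x] = median(window)
    return out
```

An isolated impulse (e.g., one corrupted pixel in one frame) is removed because it is outvoted by its 26 neighbors, which is the property that makes median-based filters attractive for impulsive noise in image sequences.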
Several constructive and improvement-type heuristics are developed and tested extensively in an experimental design setting.

Item Open Access: Algorithms for effective querying of graph-based pathway databases (2007), Çetintaş, Ahmet
As scientific curiosity shifts toward system-level investigation of genomic-scale information, data produced about cellular processes at the molecular level has been accumulating at an accelerating rate. Graph-based pathway ontologies and databases are in wide use for such data. This representation has made it possible to integrate cellular networks programmatically and to investigate them using the well-understood concepts of graph theory in order to predict their structural and dynamic properties. In this regard, it is essential to query such integrated large networks effectively, extracting the sub-networks of interest with the help of efficient algorithms and software tools. Toward this goal, we have developed a querying framework along with a number of graph-theoretic algorithms, from simple neighborhood queries to shortest paths to feedback loops, applicable to all sorts of graph-based pathway databases, from protein-protein interactions (PPIs) to metabolic and signaling pathways. These algorithms can also account for compound or nested structures present in the pathway data. They have been implemented within the querying components of the Patika (Pathway Analysis Tools for Integration and Knowledge Acquisition) tools and have proven useful for answering a number of biologically significant queries over a large graph-based pathway database.

Item Open Access: Algorithms for the survivable telecommunications network design problem under dedicated protection (2010), Damcı, Pelin
This thesis presents algorithms to solve a survivable network design problem arising in telecommunications networks.
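As a sketch of the simplest query type named in the pathway-querying item above, a k-hop neighborhood query can be written as a bounded breadth-first search. The adjacency-list representation here is hypothetical and is not the Patika API:

```python
from collections import deque

def neighborhood(graph, source, k):
    """Return all nodes within k hops of `source`.

    `graph` maps each node to an iterable of its neighbors.
    """
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if dist[node] == k:
            continue  # do not expand beyond the k-hop frontier
        for nbr in graph.get(node, ()):
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return set(dist)
```

On a pathway graph, `source` would be a molecule or process of interest and the returned set is the sub-network to extract; the same bounded-search skeleton underlies many of the richer queries (paths, feedback loops) mentioned above.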
As a design problem, we seek 2-edge-disjoint paths between every potential origin-destination pair such that the fixed costs of installing edges and the routing costs are jointly minimized. Although the survivable network design literature is vast, the particular problem at hand, incorporating fixed and variable edge costs as well as different cost structures on the two paths, has not been studied. Initially, an IP model addressing the proposed problem is developed. In order to solve problems of higher dimensions, different heuristic algorithms are designed, and the results of a computational study on a large bed of problem instances are reported.

Item Open Access: Comparison of two physical optics integration approaches for electromagnetic scattering (2008), Öztürk, Ender
A computer program that uses two different Physical Optics (PO) approaches to calculate the Radar Cross Section (RCS) of perfectly conducting planar and spherical structures is developed. The two approaches are compared mainly in terms of accuracy and efficiency. A given geometry is first meshed using planar triangles, and this surface is then illuminated by a plane wave. After meshing, the Physical Optics (PO) surface integral is numerically evaluated over the whole illuminated surface. The surface geometry and the ratio between the dimension of a facet and the operating wavelength play a significant role in the calculations. Simulations for planar and spherical structures modeled by planar triangles have been run in order to make a sound comparison between the approaches. A Method of Moments (MoM) solution is included in order to establish the accuracy. Backscattering and bistatic scattering scenarios are considered in the simulations. The effect of the polarization of the incident wave is also investigated for some geometries. The main difference between the approaches lies in the calculation of phase differences.
Through this study, a comprehensive picture of the accuracy and computational cost of the different PO techniques is built from simulations under varying circumstances, such as different geometries (planar and curved), different incident polarizations, and different electromagnetic sizes of the facets.

Item Open Access: Fast algorithms for large 3-D electromagnetic scattering and radiation problems (1997), Şendur, İbrahim Kürşat
Some interesting real-life radiation and scattering problems are electrically very large and cannot be solved using traditional solution algorithms. Despite the difficulties involved, the solution of these problems usually offers valuable results that are immediately useful in real-life applications. The fast multipole method (FMM) enables the solution of larger problems with existing computational resources by reducing the computational complexity and the memory requirement of the solution without sacrificing accuracy. This is achieved by replacing the matrix-vector multiplications of O(N^2) complexity with a faster equivalent of O(N^1.5) complexity in each iteration of an iterative scheme. The fast far-field algorithm (FAFFA) further reduces the O(N^1.5) complexity to O(N^4/3). A direct solution would require O(N^3) operations.

Item Open Access: Feature point classification and matching (2007), Ay, Avşar Polat
A feature point is a salient point that can be separated from its neighborhood. Widely used definitions assume that feature points are corners. However, some non-feature points also satisfy this assumption. Hence, non-feature points, which are highly undesired, are often detected as feature points. Texture properties around the detected points can be used to eliminate non-feature points by determining the distinctiveness of the detected points within their neighborhoods. There are many texture description methods, such as autoregressive models, Gibbs/Markov random field models, and time-frequency transforms.
To increase the performance of feature-point-related applications, two new feature point descriptors are proposed and used in non-feature point elimination and in feature point sorting and matching. To keep the descriptor algorithm computationally feasible, a single image resolution scale is selected for analyzing the texture properties around the detected points. To create a scale-space, wavelet decomposition is applied to the given images and neighborhood scale-spaces are formed for every detected point. The analysis scale of a point is selected according to the changes in the kurtosis values of histograms extracted from the neighborhood scale-space. Using the descriptors, the detected non-feature points are eliminated, feature points are sorted, and, with the inclusion of conventional descriptors, feature points are matched. According to the scores obtained in the experiments, the proposed detection-matching scheme performs more reliably than the Harris detector with gray-level patch matching. However, the SIFT detection-matching scheme performs better than the proposed scheme.

Item Open Access: Implementation of new and classical set covering based algorithms for solving the absolute p-center problem (2011), Saç, Yiğit
The p-center problem is a model of locating p facilities on a network in order to minimize the maximum coverage distance between each vertex and its closest facility. The main application areas of the p-center problem are emergency service locations such as fire and police stations, hospitals, and ambulance services. If the p facilities can be located anywhere on the network, including vertices and interior points of edges, the resulting problem is referred to as the absolute p-center problem; if they are restricted to vertex locations, it is referred to as the vertex-restricted problem. The absolute p-center problem is considerably more complicated to solve than the vertex-restricted version.
In the literature, most computational analysis and new algorithm development has been carried out on the vertex-restricted case of the p-center problem; the absolute p-center problem has received much less attention. In this thesis, our focus is on the absolute p-center problem, building on an algorithm for the p-center problem proposed by Tansel (2009). Our work is the first to solve large instances of the absolute p-center problem, with up to 900 vertices. The algorithm solves the p-center problem through a finite series of minimum set covering problems, but the set covering problems used in the algorithm are constructed differently from the ones traditionally used in the literature. The proposed algorithm is applicable to both absolute and vertex-restricted p-center problems, in both weighted and unweighted cases.

Item Open Access: Implementation of the backpropagation algorithm on iPSC/2 hypercube multicomputer system (1990), Ercoşkun, Deniz
Backpropagation is a supervised learning procedure for a class of artificial neural networks. It has recently been widely used to train such neural networks to perform relatively nontrivial tasks like text-to-speech conversion or autonomous land vehicle control. However, the slow rate of convergence of the basic backpropagation algorithm has limited its application to rather small networks, since the computational requirements grow significantly as the network size grows. This thesis work presents a parallel implementation of the backpropagation learning algorithm on a hypercube multicomputer system.
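The set-covering connection used in the p-center item above can be made concrete with a generic sketch: test candidate radii in increasing order, and for each radius ask whether all vertices can be covered by at most p centers. The sketch below uses a greedy cover rather than an exact minimum set cover, so the radius it returns is an upper bound on the optimal value; Tansel's algorithm constructs its covering problems differently:

```python
def greedy_cover_size(universe, sets_):
    """Number of sets a greedy heuristic needs to cover `universe`."""
    uncovered, used = set(universe), 0
    while uncovered:
        best = max(sets_, key=lambda s: len(s & uncovered))
        if not best & uncovered:
            return float("inf")  # instance not coverable at all
        uncovered -= best
        used += 1
    return used

def p_center_radius(dist, p):
    """Smallest tested radius r at which p (greedily chosen) vertices
    cover every vertex within distance r.  `dist` is a full symmetric
    distance matrix; only vertex-restricted centers are considered."""
    n = len(dist)
    for r in sorted({dist[i][j] for i in range(n) for j in range(n)}):
        # coverage set of candidate center i: vertices within r of i
        sets_ = [frozenset(j for j in range(n) if dist[i][j] <= r)
                 for i in range(n)]
        if greedy_cover_size(range(n), sets_) <= p:
            return r
```

Because only the distinct pairwise distances need to be tried, the p-center search reduces to a finite series of covering subproblems, which is exactly the structure the item above exploits (with exact minimum set covers instead of the greedy stand-in).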
The main motivation for this implementation is the construction of a parallel training and simulation utility for such networks, so that larger neural network applications can be experimented with.

Item Open Access: An inquiry into the metrics for evaluation of localization algorithms in wireless ad hoc and sensor networks (2008), Aksu, Hidayet
In ad hoc and sensor networks, the location of a sensor node making an observation is a vital piece of information for accurate data analysis. GPS is an established technology for obtaining precise position information. Yet resource constraints and size issues prohibit its use in small sensor nodes, which are designed to be cost-efficient. Instead, most positions are estimated by localization algorithms. Such estimates inevitably introduce errors into the information collected from the field, and it is very important to determine those errors in cases where they lead to inaccurate data analysis. After all, many components of the application, including decision-making processes, rely on the reported locations. It is therefore vital to understand the impact of errors from the application's point of view. To date, the focus in location estimation has been on the individual accuracy of each sensor's position, in isolation from the complete network. In this thesis, we point out the problems with such an approach, which considers neither the complete network topology nor the relative positions of nodes with respect to each other. We then describe the existing metrics used in the literature and also propose some novel metrics for this area of research. Furthermore, we run simulations to understand the behavior of the existing and proposed metrics.
After discussing the simulation results, we suggest a metric selection methodology that can be used for wireless sensor network applications.

Item Open Access: Location based multicast routing algorithms for wireless sensor networks (2007), Bağcı, Hakkı
Multicast routing protocols in wireless sensor networks are required for sending the same message to multiple destination nodes. Since it is often not convenient to identify the sensors in a network by a unique id, identifying nodes by their location information and sending messages to target locations is a better approach. In this thesis we propose two different distributed algorithms for multicast routing in wireless sensor networks that make use of the location information of sensor nodes. Our first algorithm groups the destination nodes according to their angular positions and sends a message toward each group, in order to reduce the total number of branches in the multicast tree, which in turn reduces the number of messages transmitted. Our second algorithm calculates a Euclidean minimum spanning tree at the source node using the positions of the target nodes; the multicast message is then forwarded to the destination nodes along the calculated MST. This helps reduce the total energy consumed for delivering the message to all target nodes, since it tends to minimize the number of transmissions. We compare these two algorithms with each other, and also against another location-based multicast routing protocol called PBM, in terms of delivery success ratio, total number of transmissions, traffic overhead, and average end-to-end delay.
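The core step of the second algorithm above, computing a Euclidean minimum spanning tree over the known node positions, can be sketched with Prim's algorithm on the complete graph of points. This is an illustrative sketch only; the forwarding of the multicast message along the tree is omitted:

```python
from math import dist

def euclidean_mst(points):
    """Prim's algorithm on the complete graph of 2-D points.

    Returns a list of (i, j) index pairs forming a minimum spanning tree.
    """
    n = len(points)
    in_tree = {0}          # grow the tree from the source node (index 0)
    edges = []
    while len(in_tree) < n:
        # cheapest edge crossing from the tree to a node not yet in it
        i, j = min(((i, j) for i in in_tree
                    for j in range(n) if j not in in_tree),
                   key=lambda e: dist(points[e[0]], points[e[1]]))
        edges.append((i, j))
        in_tree.add(j)
    return edges
```

Forwarding along MST edges rather than along one branch per destination is what drives the energy savings claimed above: the tree minimizes total edge length, which correlates with the number of radio transmissions needed.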
The results show that the algorithms we propose are more scalable and energy-efficient, making them good candidates for multicasting in wireless sensor networks.

Item Open Access: Memory-efficient multilevel physical optics algorithm for the solution of electromagnetic scattering problems (2007), Manyas, Kaplan Alp
For the computation of electromagnetic scattering from electrically large targets, the physical optics (PO) technique provides approximate but very fast solutions. Moreover, higher-order approximations, such as the physical theory of diffraction (PTD), which includes the diffraction from edges and sharp corners, can be added to the PO solution in order to enhance its accuracy. On the other hand, in real-life radar applications, where the scattering pattern must be computed over a range of frequencies and/or angles with a sufficient number of samples, further acceleration may be needed. The multilevel physical optics (MLPO) algorithm can be used for such applications; a remarkable speed-up is achieved by evaluating the PO integral in a multilevel fashion. Since correction terms such as PTD are evaluated independently, and only on the edges and sharp corners, whereas the PO integration is carried out over the entire target surface, the PO integration is the dominant factor in the computational time of such higher-order approximations. Therefore, accelerating the PO integration also reduces their computational time. In this thesis, we propose two improvements to the MLPO algorithm. The first is a modification that enables the solution of scattering problems involving nonuniform triangulations, thus decreasing the CPU time. The second is a memory-efficient version, in which the O(N^3) memory requirement is decreased to O(N^2 log N).
The efficiency of the two proposed improvements is demonstrated in numerical examples, including a real-life scattering problem in which the scattering pattern of a three-dimensional stealth target is evaluated as a function of elevation angle, azimuth angle, and frequency.

Item Open Access: A new approach in the maximum flow problem (1989), Eren, Aysen
In this study we approach the maximum flow problem from a different point of view, an effort that has led to the development of a new maximum flow algorithm. The algorithm is based on the idea that when the initial quasi-flow on each edge of the graph is set equal to the upper capacity of that edge, the node balance equations are violated while the capacity and non-negativity constraints are satisfied. In order to obtain a feasible and optimal flow, the quasi-flow on some of the edges has to be reduced. Given an initial quasi-flow, the positive-excess, negative-excess, and balanced nodes are determined. The algorithm reduces the excesses of unbalanced nodes to zero by finding residual paths joining positive-excess nodes to negative-excess nodes and sending the excesses along these paths. The minimum cut is determined first, and then the maximum flow across the given cut is found. The time complexity of the algorithm is O(n^2 m); applying a modified version of the dynamic tree structure of Sleator and Tarjan reduces it to O(nm log n).

Item Open Access: Non-interior piecewise-linear pathways to l-infinity solutions of overdetermined linear systems (1996), Elhedhli, Samir
In this thesis, a new characterization of solutions to overdetermined systems of linear equations is described, based on a simple quadratic penalty function that turns the problem into an unconstrained one. Piecewise-linear non-interior pathways to the set of optimal solutions are generated from the minimization of the unconstrained function. It is shown that the entire set of solutions is obtained from the paths for sufficiently small values of a scalar parameter.
As a consequence, a new finite penalty algorithm is given for l-infinity problems. The algorithm is implemented and exhaustively tested using random and function approximation problems. A comparison with the Barrodale-Phillips algorithm is also carried out. The results indicate that the new algorithm shows promising performance on random (non-function-approximation) problems.

Item Open Access: Out-of-core implementation of the parallel multilevel fast multipole algorithm (2013), Karaosmanoğlu, Barışcan
We developed an out-of-core (OC) implementation of the parallel multilevel fast multipole algorithm (MLFMA) to solve electromagnetic problems with reduced memory. The main purpose of the OC method is to reduce in-core memory (primary storage) usage by employing mass storage (secondary storage) units. Depending on the OC implementation, the in-core data may be left in one piece or divided into partitions. In the latter case, the partitions are written out to mass storage unit(s) and read back into in-core memory when required; in this way, memory reduction is achieved. However, the method causes time delays, because reading and writing large data on mass storage units is a lengthy procedure. In our case, repetitive access to data partitions in mass storage increases the total time of the iterative solution part of MLFMA. Such time delays can be minimized by selecting the right data types and optimizing the sizes of the data partitions. We ran the optimization tests on different types of mass storage devices, such as hard disks and solid-state drives. This thesis explores the OC implementation of the parallel MLFMA; more precisely, it presents the results of optimization tests on different partition sizes and shows how the computation time is minimized despite the time delays.
This thesis also presents full-wave solutions of scattering problems involving hundreds of millions of unknowns, obtained by employing the OC-implemented parallel MLFMA.

Item Open Access: Parallelization of an interior point algorithm for linear programming (1994), Simitçi, Hüseyin
In this study, we present the parallelization of Mehrotra's predictor-corrector interior point algorithm, which is a Karmarkar-type optimization method for linear programming. The types of computation needed by the algorithm are identified, and parallel algorithms for each type are presented. The repeated solution of large symmetric sets of linear equations, which constitutes the major computational effort in Karmarkar-type algorithms, is studied in detail. Several forward and backward solution algorithms are tested, and a buffered backward solution algorithm is developed. Heuristic bin-packing algorithms are used to schedule the sparse matrix-vector product and factorization operations. The best-performing algorithms are used to implement a system that solves linear programs in parallel on multicomputers. Design considerations and implementation details of the system are discussed, and performance results are presented for a number of real problems.

Item Open Access: Robust adaptive filtering algorithms for impulsive noise environments (1996), Aydin, Gül
In this thesis, robust adaptive filtering algorithms are introduced for impulsive noise environments, which can be modeled by α-stable distributions and/or ε-contaminated Gaussian distributions. The algorithms are developed using the Fractional Lower Order Statistics concept, and robust performance is obtained.

Item Open Access: Row generation techniques for approximate solution of linear programming problems (2010), Paç, A. Burak
In this study, row generation techniques are applied to general linear programming problems that have a very large number of constraints relative to the problem dimension.
A lower bound is obtained for the change in the objective value caused by the generation of a specific row. To achieve row selections that produce a large shift in the feasible region and the objective value at each row generation iteration, this lower bound is used to compare row generation candidates. For a warm start to the solution procedure, an effective selection of the subset of constraints constituting the initial LP is considered. Several strategies are discussed for forming a small subset of constraints that yields an initial solution close to the feasible region of the original LP. Approximation schemes are designed and compared so that row generation can be terminated at a solution in the proximity of an optimal solution of the input LP. The row generation algorithm presented in this study, enhanced with a warm-start strategy and an approximation scheme, is implemented and tested for computation time and the number of rows generated. Two efficient primal simplex method variants are used for benchmarking computation times, and the row generation algorithm performs better than at least one of them, especially when the number of constraints is large.

Item Open Access: Shortest path problem with re-routing en-route (2008), Karakaya, Banu
In this study, we examine the shortest path problem under the possibility of "re-routing" when an arc being traversed is blocked for reasons such as road and weather conditions, congestion, or accidents. If an incident occurs along the arc being traversed, the vehicle either waits until all effects of the incident are cleared and then follows the same path, or returns to the starting node of that arc and follows an escape route to the destination node; the latter course of action is called "re-routing".
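The wait-versus-re-route trade-off described above can be illustrated on a single arc as a toy expected-cost calculation. All symbols below are hypothetical illustration parameters, not the thesis's notation, and the full labeling algorithm over a network is far more general:

```python
def expected_arc_cost(t_arc, p_block, t_clear, t_back, t_escape):
    """Expected travel cost over one arc blocked with probability p_block.

    Recourse 1 (wait): sit out the incident (t_clear), then finish the arc.
    Recourse 2 (re-route): return to the arc's start node (t_back) and
    take an escape route of length t_escape.  The cheaper recourse wins.
    """
    wait = t_arc + t_clear
    reroute = t_back + t_escape
    recourse = min(wait, reroute)
    # no incident with prob. (1 - p_block), incident with prob. p_block
    return (1 - p_block) * t_arc + p_block * recourse
```

A label-setting algorithm for the full problem would propagate such expected costs over paths, with the additional restriction, stated below, that an abandoned arc is never revisited.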
We also assume that, when an incident occurs and the alternative of abandoning the blocked arc is chosen, this arc is not visited again during the rest of the travel through the network. We propose a labeling algorithm to solve this problem. A real case is then analyzed with the proposed algorithm, and several numerical studies are conducted to assess the sensitivity of the solution to the probability and travel time parameters.

Item Open Access: Signal and image processing algorithms using interval convex programming and sparsity (2012), Köse, Kıvanç
In this thesis, signal and image processing algorithms based on sparsity and interval convex programming are developed for inverse problems. Inverse signal processing problems are solved in the literature by minimizing ℓ1-norm or Total Variation (TV) based cost functions. Here, a modified entropy functional approximating the absolute value function is defined. This functional is also used to approximate the ℓ1-norm, the most widely used cost function in sparse signal processing. The modified entropy functional is continuously differentiable and convex. As a result, it is possible to develop iterative, globally convergent algorithms for compressive sensing, denoising, and restoration problems using the modified entropy functional. Iterative interval convex programming algorithms are constructed using Bregman's D-projection operator. In sparse signal processing, it is assumed that the signal can be represented using a sparse set of coefficients in some transform domain; therefore, minimizing the total variation of the signal is expected to yield sparse representations. Another cost function introduced for inverse problems is the Filtered Variation (FV) function, a generalized version of the Total Variation (TV) function. The TV function uses the differences between the pixels of an image or the samples of a signal, which is essentially simple Haar filtering.
In FV, high-pass filter outputs are used instead of differences. This provides flexibility in algorithm design, adapting to the local variations of the signal. Extensive simulation studies using the new cost functions are carried out, and better experimental restoration and reconstruction results are obtained compared to the algorithms in the literature.
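The distinction drawn in the last item between TV and FV can be illustrated on a 1-D signal: TV sums the absolute first differences (equivalent to a Haar high-pass filter), while FV sums the magnitudes of an arbitrary high-pass filter's output. The sketch below is a minimal illustration of that relationship; the filter choice is just an example, not one prescribed by the thesis:

```python
def total_variation(x):
    """TV(x) = sum of |x[i+1] - x[i]| over the signal."""
    return sum(abs(b - a) for a, b in zip(x, x[1:]))

def filtered_variation(x, h):
    """Sum of absolute outputs of the FIR filter h ('valid' positions only).

    With h = [-1, 1] (the first-difference / Haar high-pass filter),
    this reduces exactly to total_variation.
    """
    m = len(h)
    return sum(abs(sum(h[k] * x[i + k] for k in range(m)))
               for i in range(len(x) - m + 1))
```

Swapping in a longer high-pass filter for `h` is what gives FV its flexibility: the penalty can be tuned to the local frequency content of the signal instead of being fixed to first differences.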