Browsing by Subject "Algorithms"
Now showing 1 - 20 of 432
Item Open Access: 2-D adaptive prediction based Gaussianity tests in microcalcification detection (SPIE, 1998-01)
Gürcan, M. Nafi; Yardımcı, Yasemin; Çetin, A. Enis
With increasing use of Picture Archiving and Communication Systems (PACS), Computer-aided Diagnosis (CAD) methods will be more widely utilized. In this paper, we develop a CAD method for the detection of microcalcification clusters in mammograms, which are an early sign of breast cancer. The method we propose makes use of two-dimensional (2-D) adaptive filtering and a Gaussianity test recently developed by Ojeda et al. for causal invertible time series. The first step of this test is adaptive linear prediction. It is assumed that the prediction error sequence has a Gaussian distribution, as the mammogram images do not contain sharp edges. Since microcalcifications appear as isolated bright spots, the prediction error sequence contains large outliers around microcalcification locations. The second step of the algorithm is the computation of a test statistic from the prediction error values to determine whether the samples are from a Gaussian distribution. The Gaussianity test is applied over small, overlapping square regions, and the regions in which the test fails are marked as suspicious. Experimental results obtained from a mammogram database are presented.

Item Open Access: Accelerated phase-cycled SSFP imaging with compressed sensing (Institute of Electrical and Electronics Engineers Inc., 2015)
Çukur, T.
Balanced steady-state free precession (SSFP) imaging suffers from irrecoverable signal losses, known as banding artifacts, in regions of large B0 field inhomogeneity. A common solution is to acquire multiple phase-cycled images, each with a different frequency sensitivity, such that the locations of the banding artifacts are shifted in space. These images are then combined to alleviate signal loss across the entire field of view. Although high levels of artifact suppression are attainable using a large number of images, this is a time-costly process that limits clinical utility. Here, we propose to accelerate the individual acquisitions such that the overall scan time is equal to that of a single SSFP acquisition. Aliasing artifacts and noise are minimized by using a variable-density random sampling pattern in k-space and by generating disjoint sampling patterns for the separate acquisitions. A sparsity-enforcing method is then used for image reconstruction. Demonstrations on realistic brain phantom images and on in vivo brain and knee images are provided. In all cases, the proposed technique enables robust SSFP imaging in the presence of field inhomogeneities without prolonging scan times. © 2014 IEEE.
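As a rough illustration of the sampling strategy described in the SSFP item above, the sketch below draws variable-density random k-space masks that are kept disjoint across phase-cycled acquisitions. The mask length, density profile, acceleration factor, and function name are illustrative assumptions, not the exact design used in the paper.

```python
import numpy as np

def disjoint_vd_masks(n_lines=256, n_acq=4, accel=4, decay=3.0, seed=0):
    """Draw variable-density random phase-encode masks for n_acq
    phase-cycled acquisitions, keeping the sampled lines disjoint
    across acquisitions (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    # Density profile: denser near the k-space center, sparser at the edges.
    k = np.linspace(-1, 1, n_lines)
    density = np.exp(-decay * np.abs(k))
    per_acq = n_lines // accel          # lines sampled per acquisition
    available = np.arange(n_lines)      # lines not yet used by any mask
    masks = np.zeros((n_acq, n_lines), dtype=bool)
    for a in range(n_acq):
        p = density[available]
        p /= p.sum()
        picked = rng.choice(available, size=per_acq, replace=False, p=p)
        masks[a, picked] = True
        available = np.setdiff1d(available, picked)  # keep masks disjoint
    return masks

masks = disjoint_vd_masks()
print(masks.sum(axis=1))                          # lines kept per acquisition
print(np.logical_and(masks[0], masks[1]).any())   # False: masks are disjoint
```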
Item Open Access: Accuracy and efficiency considerations in the solution of extremely large electromagnetics problems (IEEE, 2011)
Gürel, Levent; Ergül, Özgür
This study considers fast and accurate solutions of extremely large electromagnetics problems. Surface formulations of large-scale objects lead to dense matrix equations involving millions of unknowns. Thanks to recent developments in parallel algorithms and high-performance computers, these problems can easily be solved with unprecedented levels of accuracy and detail. For example, using a parallel implementation of the multilevel fast multipole algorithm (MLFMA), we are able to solve electromagnetics problems discretized with hundreds of millions of unknowns. Unfortunately, as the problem size grows, it becomes difficult to assess the accuracy and efficiency of the solutions, especially when comparing different implementations. This paper presents our efforts to solve extremely large electromagnetics problems with an emphasis on accuracy and efficiency. We present a list of benchmark problems that can be used to compare different implementations for large-scale problems. © 2011 IEEE.

Item Open Access: Active pixel merging on hypercube multicomputers (Springer, Berlin, Heidelberg, 1996)
Kurç, Tahsin M.; Aykanat, Cevdet; Özgüç, Bülent
This paper presents algorithms developed for the pixel merging phase of object-space parallel polygon rendering on hypercube-connected multicomputers. These algorithms reduce the volume of communication in the pixel merging phase by exchanging only the local foremost pixels. In order to avoid message fragmentation, local foremost pixels should be stored in consecutive memory locations. An algorithm, called the modified scanline z-buffer, is proposed to store the local foremost pixels efficiently. This algorithm also avoids initializing the scanline z-buffer for each scanline on the screen. Good processor utilization is achieved by subdividing the image space among the processors in the pixel merging phase. Efficient algorithms for load balancing in the pixel merging phase are also proposed. Experimental results obtained on a 16-processor Intel iPSC/2 hypercube multicomputer are presented. © Springer-Verlag Berlin Heidelberg 1996.
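The per-pixel merge rule at the heart of the "Active pixel merging" item above can be sketched in a few lines: each processor holds its local foremost (smallest-depth) pixels, and merging two local buffers keeps the foremost contribution at every pixel. This is a minimal sketch of the merge operation only, not the paper's hypercube communication scheme or modified scanline z-buffer data structure.

```python
import numpy as np

def merge_zbuffers(z_a, color_a, z_b, color_b):
    """Merge two local z-buffers: at each pixel keep the contribution
    with the smaller depth value (the foremost pixel)."""
    take_b = z_b < z_a                              # pixels where buffer B is in front
    z_out = np.where(take_b, z_b, z_a)
    color_out = np.where(take_b[..., None], color_b, color_a)
    return z_out, color_out

# Toy example: two 4x4 buffers with RGB colors from two "processors".
rng = np.random.default_rng(1)
z1, z2 = rng.random((4, 4)), rng.random((4, 4))
c1 = np.zeros((4, 4, 3)); c1[..., 0] = 1.0          # red image from processor A
c2 = np.zeros((4, 4, 3)); c2[..., 2] = 1.0          # blue image from processor B
z_merged, c_merged = merge_zbuffers(z1, c1, z2, c2)
```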
Item Open Access: Adaptation of multiway-merge sorting algorithm to MIMD architectures with an experimental study (2002)
Cantürk, Levent
Sorting is perhaps one of the most widely studied problems of computing. Numerous asymptotically optimal sequential algorithms have been discovered, and asymptotically optimal algorithms have been presented for various parallel models as well. Parallel sorting algorithms have already been proposed for a variety of multiple-instruction, multiple-data (MIMD) architectures. In this thesis, we adapt the multiway-merge sorting algorithm, originally designed for product networks, to MIMD architectures. It has good load-balancing properties, modest communication needs, and good performance. The multiway-merge sort algorithm requires only two all-to-all personalized communications (AAPC) and two one-to-one communications, independent of the input size. In addition to evenly distributed load balancing, the algorithm requires only 2N/P local memory per processor in the worst case, where N is the number of items to be sorted and P is the number of processors. We have implemented the algorithm on the PC cluster established at the Computer Engineering Department of Bilkent University. To compare the results, we have implemented a sample sort algorithm (PSRS, Parallel Sorting by Regular Sampling) by X. Liu et al. and a parallel quicksort algorithm (HyperQuickSort) on the same cluster. In the experimental studies, we have used three different benchmarks, namely Uniformly, Gaussian, and Zero distributed inputs. Although the multiway-merge algorithm did not achieve better results than the other two, which are theoretically cost-optimal algorithms, there are cases in which it outperforms them, such as the Zero distributed input. The results of the experiments are reported in detail. The multiway-merge sort algorithm is not necessarily the best parallel sorting algorithm, but it is expected to achieve acceptable performance on a wide spectrum of MIMD architectures.

Item Open Access: Adaptive filtering approaches for non-Gaussian stable processes (IEEE, 1995-05)
Arıkan, Orhan; Belge, Murat; Çetin, A. Enis; Erzin, Engin
A large class of physical phenomena observed in practice exhibit non-Gaussian behavior. In this paper, α-stable distributions, which have heavier tails than the Gaussian distribution, are considered to model non-Gaussian signals. Adaptive signal processing in the presence of such noise is a requirement of many practical problems. Since direct application of commonly used adaptation techniques fails in these applications, new approaches for adaptive filtering of α-stable random processes are introduced.

Item Open Access: Adaptive filtering for non-Gaussian stable processes (IEEE, 1994)
Arıkan, Orhan; Çetin, A. Enis; Erzin, E.
A large class of physical phenomena observed in practice exhibit non-Gaussian behavior. In this letter, α-stable distributions, which have heavier tails than the Gaussian distribution, are considered to model non-Gaussian signals. Adaptive signal processing in the presence of such noise is a requirement of many practical problems. Since direct application of commonly used adaptation techniques fails in these applications, new algorithms for adaptive filtering of α-stable random processes are introduced.

Item Open Access: Adaptive routing framework for network on chip architectures (ACM, 2016-01)
Mustafa, Naveed Ul; Öztürk, Özcan; Niar, S.
In this paper we suggest and demonstrate the idea of applying multiple routing algorithms during the execution of a real application mapped on a Network-on-Chip (NoC). The traffic pattern of a real application may change during its execution. As the performance of a routing algorithm depends on the traffic pattern, using the same routing algorithm for the entire span of execution may be inefficient. We study the feasibility of this idea for applications such as SPARSE and an MPEG-4 decoder by applying different routing algorithms. By applying more than one routing algorithm, throughput improves by up to 17.37% and 6.74% for the SPARSE and MPEG-4 decoder applications, respectively, compared to the application of a single routing algorithm. © 2016 ACM.
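The basic structure described in the "Adaptation of multiway-merge sorting algorithm to MIMD architectures" item above (sort local blocks, then combine the sorted runs in a single multiway merge) can be illustrated with a toy shared-memory sketch. The real algorithm distributes the blocks over P processors and uses all-to-all personalized communication, which is not modeled here; block splitting and the function name are assumptions for illustration.

```python
import heapq
import random

def multiway_merge_sort(items, p=4):
    """Toy shared-memory sketch: split the input into P blocks (one per
    'processor'), sort each block locally, then perform one multiway
    merge of the P sorted runs."""
    n = len(items)
    blocks = [sorted(items[i * n // p:(i + 1) * n // p]) for i in range(p)]
    return list(heapq.merge(*blocks))    # single multiway merge of sorted runs

data = [random.randint(0, 99) for _ in range(20)]
assert multiway_merge_sort(data) == sorted(data)
```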
Item Open Access: Adaptive tracking of narrowband HF channel response (Wiley-Blackwell Publishing, 2003)
Arikan, F.; Arıkan, Orhan
Estimation of the channel impulse response constitutes a first step in the computation of the scattering function, channel equalization, elimination of multipath, and optimum detection and identification of signals transmitted through the HF channel. Due to spatial and temporal variations, the HF channel impulse response has to be estimated adaptively. Based on the developed state-space and measurement models, an adaptive Kalman filter is proposed to track the HF channel variation in time. Robust methods for initialization and for adaptively adjusting the noise covariance in the system dynamics are proposed. In simulated examples under good, moderate, and poor ionospheric conditions, it is observed that the adaptive Kalman filter based channel estimator provides reliable channel estimates and can track the variation of the channel in time with high accuracy.

Item Open Access: An adaptive, energy-aware and distributed fault-tolerant topology-control algorithm for heterogeneous wireless sensor networks (Elsevier BV, 2016)
Deniz, F.; Bagci, H.; Korpeoglu, I.; Yazıcı, A.
This paper introduces an adaptive, energy-aware and distributed fault-tolerant topology-control algorithm, namely the Adaptive Disjoint Path Vector (ADPV) algorithm, for heterogeneous wireless sensor networks. In this heterogeneous model, we have resource-rich supernodes as well as ordinary sensor nodes that are supposed to be connected to the supernodes. Unlike the static alternative, the Disjoint Path Vector (DPV) algorithm, the focus of ADPV is to secure supernode connectivity in the presence of node failures, and ADPV achieves this goal by dynamically adjusting the sensor nodes' transmission powers. The ADPV algorithm involves two phases: a single initialization phase, which occurs at the beginning, and restoration phases, which are invoked each time the network's supernode connectivity is broken. Restoration phases utilize alternative routes that are computed at the initialization phase with the help of a novel optimization based on the well-known set-packing problem. Through extensive simulations, we demonstrate that ADPV is superior in preserving supernode connectivity. In particular, ADPV achieves this goal up to a failure of 95% of the sensor nodes, while the performance of DPV is limited to 5%. In turn, with our adaptive algorithm, we obtain a two-fold increase in supernode-connected lifetime compared to the DPV algorithm.

Item Open Access: Algebraic acceleration and regularization of the source reconstruction method with the recompressed adaptive cross approximation (IEEE, 2014)
Kazempour, Mahdi; Gürel, Levent
We present a compression algorithm to accelerate the solution of source reconstruction problems that are formulated with integral equations and defined on arbitrary three-dimensional surfaces. This compression technique benefits from the adaptive cross approximation (ACA) algorithm in the first step. A further error-controllable recompression is applied after the ACA. The numerical results illustrate the efficiency and accuracy of the proposed method. © 2014 IEEE.

Item Open Access: An algorithm based on facial decomposition for finding the efficient set in multiple objective linear programming (Elsevier, 1996)
Sayın, S.
We propose a method for finding the efficient set of a multiple objective linear program based on the well-known facial decomposition of the efficient set. The method incorporates a simple linear programming test that identifies efficient faces while employing a top-down search strategy which avoids enumeration of efficient extreme points and locates the maximally efficient faces of the feasible region. We suggest that discrete representations of the efficient faces could be obtained and presented to the Decision Maker. Results of computational experiments are reported.
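A minimal sketch of the kind of tracker described in the "Adaptive tracking of narrowband HF channel response" item above: a scalar Kalman filter following a single time-varying channel tap under a random-walk state model, with a simple innovation-driven adjustment of the process-noise variance. The measurement model, the adaptation heuristic, and all parameter values are assumptions made for illustration; the paper's state-space model and adaptation rules may differ.

```python
import numpy as np

def track_channel(obs, pilots, q0=1e-3, r=1e-2, alpha=0.98):
    """Scalar Kalman filter tracking a time-varying channel tap h[n]
    from observations y[n] = pilot[n] * h[n] + noise, with a heuristic
    innovation-based adjustment of the process-noise variance q."""
    h_est, p, q = 0.0, 1.0, q0
    estimates = []
    for y, s in zip(obs, pilots):
        p = p + q                          # predict (random-walk state model)
        innov = y - s * h_est              # innovation for measurement y = s*h + v
        gain = p * s / (s * s * p + r)     # Kalman gain
        h_est = h_est + gain * innov       # update state estimate
        p = (1.0 - gain * s) * p           # update error variance
        q = alpha * q + (1.0 - alpha) * max(innov**2 - r, 0.0)  # adapt q (heuristic)
        estimates.append(h_est)
    return np.array(estimates)

# Toy run: slowly drifting real-valued channel tap observed through BPSK pilots.
rng = np.random.default_rng(0)
n = 500
h_true = np.cumsum(0.01 * rng.standard_normal(n)) + 1.0
pilots = np.sign(rng.standard_normal(n))
obs = pilots * h_true + 0.1 * rng.standard_normal(n)
h_hat = track_channel(obs, pilots)
print(float(np.mean((h_hat[50:] - h_true[50:]) ** 2)))   # small tracking error
```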
Item Open Access: Algorithms and basis functions in tomographic reconstruction of ionospheric electron density (IEEE, 2005)
Yavuz, E.; Arıkan, F.; Arıkan, Orhan; Erol, C. B.
Computerized ionospheric tomography (CIT) is a method for investigating ionospheric electron density in two or three dimensions, and it provides a flexible tool for studying the ionosphere. Earth-based receivers record signals transmitted from the GPS satellites, and the computed pseudorange and phase values are used to calculate the Total Electron Content (TEC). The computed TEC data and tomographic reconstruction algorithms are used together to obtain tomographic images of electron density. In this study, a set of basis functions and reconstruction algorithms is used to investigate the best-fitting two-dimensional tomographic images of ionospheric electron density in height and latitude for one satellite and one receiver pair. Results are compared to the IRI-95 ionosphere model, and both receiver and model errors are neglected.

Item Open Access: Algorithms for effective querying of compound graph-based pathway databases (BioMed Central Ltd., 2009-11-16)
Doğrusöz, Uğur; Çetintaş, Ahmet; Demir, Emek; Babur, Özgün
Background: Graph-based pathway ontologies and databases are widely used to represent data about cellular processes. This representation makes it possible to programmatically integrate cellular networks and to investigate them using the well-understood concepts of graph theory in order to predict their structural and dynamic properties. An extension of this graph representation, namely hierarchically structured or compound graphs, in which a member of a biological network may recursively contain a sub-network of a somehow logically similar group of biological objects, provides many additional benefits for the analysis of biological pathways, including reduction of complexity by decomposition into distinct components or modules. In this regard, it is essential to effectively query such integrated large compound networks to extract the sub-networks of interest with the help of efficient algorithms and software tools. Results: Towards this goal, we developed a querying framework, along with a number of graph-theoretic algorithms from simple neighborhood queries to shortest paths to feedback loops, that is applicable to all sorts of graph-based pathway databases, from PPIs (protein-protein interactions) to metabolic and signaling pathways. The framework is unique in that it can account for compound or nested structures and ubiquitous entities present in the pathway data. In addition, the queries may be related to each other through "AND" and "OR" operators, and can be recursively organized into a tree, in which the result of one query might be a source and/or target for another, to form more complex queries. The algorithms were implemented within the querying component of a new version of the software tool PATIKAweb (Pathway Analysis Tool for Integration and Knowledge Acquisition) and have proven useful for answering a number of biologically significant questions for large graph-based pathway databases. Conclusion: The PATIKA Project Web site is http://www.patika.org. PATIKAweb version 2.1 is available at http://web.patika.org. © 2009 Dogrusoz et al; licensee BioMed Central Ltd.
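A toy version of the compound-aware neighborhood query mentioned in the pathway-querying item above: a bounded breadth-first search that, whenever it reaches a compound node, also pulls in that node's member sub-network. The dictionary-based graph representation and the function name are illustrative assumptions, not PATIKAweb's actual data model or implementation.

```python
from collections import deque

def neighborhood_query(adj, members, sources, limit):
    """Breadth-first neighborhood query up to `limit` hops that also
    expands compound (nested) nodes: whenever a compound node is
    reached, its members are included in the result at no extra hop cost."""
    seen = set(sources)
    frontier = deque((s, 0) for s in sources)
    while frontier:
        node, dist = frontier.popleft()
        for m in members.get(node, ()):          # expand compound membership
            if m not in seen:
                seen.add(m)
                frontier.append((m, dist))
        if dist == limit:
            continue
        for nb in adj.get(node, ()):             # regular graph neighbors
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, dist + 1))
    return seen

# Toy pathway: complex C contains proteins p1 and p2; C interacts with p3.
adj = {"p1": ["C"], "C": ["p1", "p3"], "p3": ["C"]}
members = {"C": ["p1", "p2"]}
print(neighborhood_query(adj, members, ["p3"], limit=1))  # {'p3', 'C', 'p1', 'p2'}
```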
Item Open Access: Algorithms for efficient vectorization of repeated sparse power system network computations (IEEE, 1995)
Aykanat, Cevdet; Özgü, Ö.; Güven, N.
Standard sparsity-based algorithms used in power system applications need to be restructured for efficient vectorization due to the extremely short vectors processed. Further, intrinsic architectural features of vector computers, such as chaining and sectioning, should also be exploited for utmost performance. This paper presents novel data storage schemes and vectorization algorithms that resolve the recurrence problem, exploit chaining, and minimize the number of indirect element selections in the repeated solution of sparse linear systems of equations widely encountered in various power system problems. The proposed schemes are also applied and experimented with for the vectorization of power mismatch calculations arising in the solution phase of the fast decoupled load flow (FDLF), which involves typical repeated sparse power network computations. The relative performances of the proposed and existing vectorization schemes are evaluated, both theoretically and experimentally, on an IBM 3090/VF.

Item Open Access: Algorithms for layout of disconnected graphs (Elsevier, 2000)
Doğrusöz, Uğur
We present efficient algorithms for the layout of disconnected objects in a graph (isolated nodes and components) for a specified aspect ratio. These linear and near-linear algorithms are based on alternate-bisection and two-dimensional packing methodologies. In addition, the parameters that affect the performance of these algorithms, as well as the circumstances under which the two methodologies perform well, are analyzed.
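As a loose illustration of the packing idea in the "Algorithms for layout of disconnected graphs" item above, the sketch below places component bounding boxes on horizontal shelves while targeting a given aspect ratio. This is a generic strip-packing sketch under assumed inputs, not the paper's alternate-bisection or two-dimensional packing algorithms.

```python
import math

def shelf_pack(boxes, aspect=1.0):
    """Place disconnected components (given as (width, height) bounding
    boxes) on horizontal shelves, targeting a drawing whose width/height
    ratio is roughly `aspect`. Returns placements in packing order."""
    total_area = sum(w * h for w, h in boxes)
    target_w = math.sqrt(total_area * aspect)   # width implied by the aspect ratio
    positions, x, y, shelf_h = [], 0.0, 0.0, 0.0
    for w, h in sorted(boxes, key=lambda b: -b[1]):   # tallest components first
        if x > 0 and x + w > target_w:          # current shelf is full: start a new one
            x, y, shelf_h = 0.0, y + shelf_h, 0.0
        positions.append((x, y, w, h))
        x += w
        shelf_h = max(shelf_h, h)
    return positions

print(shelf_pack([(3, 2), (2, 2), (1, 1), (4, 1)], aspect=1.0))
```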
Item Open Access: Algorithms for on-line vertex enumeration problem (2017-09)
Kaya, İrfan Caner
The vertex enumeration problem is to enumerate all vertices of a polyhedron P given as the intersection of finitely many halfspaces. It is a basis for many algorithms designed to solve problems from various application areas, and there are many algorithms for it in the literature. On the one hand, there are iterative algorithms which solve the so-called 'on-line' vertex enumeration problem in each iteration; that is, in each iteration the current polyhedron is intersected with an additional halfspace that defines P. On the other hand, there are simplex-type algorithms which take the set of all halfspaces as input from the beginning. One use of the vertex enumeration problem is in Benson-type multiobjective optimization algorithms, whose aim is to generate or approximate the Pareto frontier (the set of nondominated points in the objective space). In each iteration of Benson's algorithm, a polyhedron which contains the Pareto frontier is intersected with an additional halfspace in order to find a finer outer approximation. The vertex enumeration problem used within this algorithm has a special structure: the polyhedron to be generated is known to be unbounded, with a recession cone equal to the positive orthant. In this thesis, we consider the double description method, which solves an on-line vertex enumeration problem whose starting polyhedron is bounded. (1) We develop an iterative algorithm to solve the vertex enumeration problem from scratch, where the polyhedron P is allowed to be bounded or unbounded. (2) We then slightly modify this algorithm to be more efficient, although it only works for problems where the recession cone of P is known to be the positive orthant. (3) Finally, we develop an additional algorithm for these problems; for this one, we modify the double description method so that it uses the extreme directions of the recession cone more effectively. We provide an illustrative example to explain the algorithms in detail. We implement the algorithms using MATLAB, employ each of them as a function of a Benson-type multiobjective optimization algorithm, and test the performance of the algorithms on randomly generated linear multiobjective optimization problems. Accordingly, for two-dimensional problems, there is no clear distinction between the run-time performances of these algorithms. However, as the dimension of the vertex enumeration problem increases, the last algorithm (Algorithm 3) becomes more efficient than the others.

Item Open Access: Algorithms for sink mobility in wireless sensor networks to improve network lifetime (Springer, 2012-09)
Koç, Metin; Körpeoğlu, İbrahim
Sink mobility is an effective solution in the literature for improving wireless sensor network lifetime. In this paper, we propose a set of algorithms for the sink site determination (SSD) and movement strategy problems of sink mobility. We also present experimental results that compare the performance of our algorithms with other approaches in the literature. © 2012 Springer-Verlag London Limited.

Item Open Access: Algorithms for within-cluster searches using inverted files (Springer, 2006-11)
Altıngövde, İsmail Şengör; Can, Fazlı; Ulusoy, Özgür
Information retrieval over clustered document collections has two successive stages: first identifying the best-clusters and then the best-documents in these clusters that are most similar to the user query. In this paper, we assume that an inverted file over the entire document collection is used for the latter stage. We propose and evaluate algorithms for within-cluster searches, i.e., to integrate the best-clusters with the best-documents to obtain a final output that includes the highest-ranked documents only from the best-clusters. Our experiments on a TREC collection including 210,158 documents with several query sets show that an appropriately selected integration algorithm, based on the query length and system resources, can significantly improve query evaluation efficiency. © Springer-Verlag Berlin Heidelberg 2006.
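One simple way to realize the within-cluster search described in the item above is to score documents with the inverted index as usual but keep only postings whose documents fall in the best-clusters; the sketch below shows that idea. The data layout and function name are assumptions, and the paper itself proposes and compares several integration algorithms rather than this single strategy.

```python
from collections import defaultdict

def within_cluster_search(inverted_index, doc_cluster, best_clusters, query_terms, k=10):
    """Score documents via the inverted index, keeping only documents
    that belong to the best-clusters selected in the first stage."""
    allowed = set(best_clusters)
    scores = defaultdict(float)
    for term in query_terms:
        for doc, weight in inverted_index.get(term, []):
            if doc_cluster[doc] in allowed:      # within-cluster filter
                scores[doc] += weight
    return sorted(scores.items(), key=lambda x: -x[1])[:k]

# Toy collection: two terms, three documents, two clusters.
index = {"sort": [("d1", 0.7), ("d2", 0.2)], "merge": [("d1", 0.4), ("d3", 0.9)]}
clusters = {"d1": "c1", "d2": "c2", "d3": "c1"}
print(within_cluster_search(index, clusters, ["c1"], ["sort", "merge"]))
```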
Item Open Access: Alignment of uncalibrated images for multi-view classification (IEEE, 2011)
Arık, Sercan Ömer; Vural, E.; Frossard, P.
Efficient solutions for the classification of multi-view images can be built on graph-based algorithms when little information is known about the scene or cameras. Such methods typically require a pairwise similarity measure between images, where a common choice is the Euclidean distance. However, the accuracy of the Euclidean distance as a similarity measure is restricted to cases where images are captured from nearby viewpoints. In settings with large transformations and viewpoint changes, alignment of the images is necessary prior to distance computation. We propose a method for the registration of uncalibrated images that capture the same 3D scene or object. We model the depth map of the scene as an algebraic surface, which yields a warp model in the form of a rational function between image pairs. The warp model is computed by minimizing the registration error, where the registered image is a weighted combination of two images generated with two different warp functions, estimated from feature matches and image intensity functions, in order to provide robust registration. We demonstrate the flexibility of our alignment method by experimentation on several wide-baseline image pairs with arbitrary scene geometries and texture levels. Moreover, the results on multi-view image classification suggest that the proposed alignment method can be effectively used in graph-based classification algorithms for the computation of pairwise distances, where it achieves significant improvements over distance computation without prior alignment. © 2011 IEEE.