Browsing by Subject "Hypergraphs."
Now showing 1 - 8 of 8
Item Open Access
Active set partitioning scheme for extending the lifetime of large wireless sensor networks (2010) Kalkan, Mustafa
Wireless sensor networks consist of spatially distributed, energy-constrained autonomous devices called sensors that cooperatively monitor physical or environmental conditions such as temperature, sound, vibration, pressure, or pollutants at different locations. Because sensor nodes have a limited energy supply, energy efficiency is a critical design issue in wireless sensor networks. Having all nodes work simultaneously in active mode results in excessive energy consumption and packet collisions because of the high node density in the network. In order to minimize energy consumption and extend network lifetime, this thesis presents a centralized graph partitioning approach that organizes the sensor nodes into a number of active sensor node sets such that each active set maintains the desired level of sensing coverage and forms a connected network to perform sensing and communication tasks successfully. We evaluate the proposed scheme via simulations under different network topologies and parameters in terms of network lifetime and run-time efficiency, and observe approximately 50% improvement in the number of obtained active node sets when compared with different active node set selection mechanisms.

Item Open Access
Balance preserving min-cut replication set for a K-way hypergraph partitioning (2010) Yazıcı, Volkan
Replication is a widely used technique in information retrieval and database systems for providing fault tolerance and reducing parallelization and processing costs. Combinatorial models based on hypergraph partitioning have been proposed for various problems arising in information retrieval and database systems. We consider the possibility of using vertex replication to improve the quality of hypergraph partitioning. In this study, we focus on the Balance Preserving Min-Cut Replication Set (BPMCRS) problem, where we are initially given a maximum replication capacity and a K-way hypergraph partition with an initial imbalance ratio. The objective in the BPMCRS problem is to find vertex replication sets for each part of the given partition such that the cutsize of the partition is improved as much as possible and the initial imbalance is either preserved or reduced under the given replication capacity constraint. To address the BPMCRS problem, we propose a model based on a unique blend of coarsening and integer linear programming (ILP) schemes, where the coarsening algorithm is based on the Dulmage-Mendelsohn decomposition. Experiments show that the ILP formulation coupled with the Dulmage-Mendelsohn decomposition-based coarsening provides high-quality results in feasible execution times for reducing the cost of a given K-way hypergraph partition.
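The replication idea behind BPMCRS can be illustrated with a small sketch. Below is a minimal Python sketch assuming the cut-net cutsize metric, with a replicated placement represented as a map from each vertex to the set of parts holding a copy; the names (`cutsize`, `imbalance`, `placement`) are illustrative, and the thesis's actual ILP formulation and Dulmage-Mendelsohn-based coarsening are not reproduced here.

```python
from collections import defaultdict

def cutsize(nets, placement):
    """Cut-net cutsize of a replicated placement.

    nets      : list of nets, each a list of vertex ids (pins)
    placement : dict vertex -> set of parts holding a copy of that vertex
    A net is uncut if at least one part holds a copy of every one of its pins.
    """
    cut = 0
    for net in nets:
        covering = set.intersection(*(placement[v] for v in net))
        if not covering:
            cut += 1
    return cut

def imbalance(placement, weights, K):
    """Imbalance ratio of part weights, counting replicas in every part they occupy."""
    part_weight = defaultdict(float)
    for v, parts in placement.items():
        for p in parts:
            part_weight[p] += weights[v]
    avg = sum(part_weight.values()) / K
    return max(part_weight.values()) / avg - 1.0

# Toy 2-way partition: replicating vertex 2 into part 0 uncuts net [1, 2].
nets = [[0, 1], [1, 2], [2, 3]]
placement = {0: {0}, 1: {0}, 2: {1}, 3: {1}}
weights = {v: 1.0 for v in placement}
print(cutsize(nets, placement))                   # 1
placement[2].add(0)                               # replicate vertex 2 into part 0
print(cutsize(nets, placement))                   # 0
print(round(imbalance(placement, weights, K=2), 2))
```

The sketch only shows how replication trades extra part weight for a smaller cut; choosing which vertices to replicate under the capacity constraint is the optimization problem the thesis solves.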
Item Open Access
Cache locality exploiting methods and models for sparse matrix-vector multiplication (2009) Akbudak, Kadir
Sparse matrix-vector multiplication (SpMxV) is an important kernel operation widely used in linear solvers, where the same sparse matrix is repeatedly multiplied by a dense vector to solve a system of linear equations. High performance gains can be obtained by taking advantage of today's deep cache hierarchies in SpMxV operations, but matrices with irregular sparsity patterns make it difficult to exploit data locality effectively. Different techniques have been proposed in the literature to utilize the cache hierarchy effectively by exploiting data locality during SpMxV. In this work, we investigate two distinct frameworks for cache-aware/cache-oblivious SpMxV: single matrix-vector multiply and multiple submatrix-vector multiplies. For the single matrix-vector multiply framework, we propose a cache-size-aware top-down row/column-reordering approach based on 1D sparse matrix partitioning that utilizes recently proposed hypergraph models of sparse matrices, and a cache-oblivious bottom-up approach based on hierarchical clustering of rows/columns with similar sparsity patterns. We also propose a column compression scheme as a preprocessing step that makes these two approaches cache-line-size aware. The multiple submatrix-vector multiplies framework depends on partitioning the matrix into multiple nonzero-disjoint submatrices. For the effective matrix-to-submatrix partitioning required in this framework, we propose a cache-size-aware top-down approach based on 2D sparse matrix partitioning that utilizes the recently proposed fine-grain hypergraph model. For this framework, we also propose a traveling salesman formulation for an effective ordering of the individual submatrix-vector multiply operations. We evaluate the validity of our models and methods on a wide range of sparse matrices. Experimental results show that the proposed methods and models outperform state-of-the-art schemes.
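For reference, the kernel being optimized here is plain sparse matrix-vector multiplication. Below is a minimal Python sketch assuming CSR storage, together with a helper that applies a given row permutation; the permutation itself is taken as input, so the hypergraph-partitioning-based and clustering-based reordering methods proposed in the thesis are not reproduced here.

```python
def spmv_csr(row_ptr, col_idx, vals, x):
    """y = A @ x for a sparse matrix A stored in CSR format."""
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        s = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            s += vals[k] * x[col_idx[k]]
        y[i] = s
    return y

def permute_rows_csr(row_ptr, col_idx, vals, perm):
    """Reorder the rows of a CSR matrix so that row perm[i] becomes row i.

    Placing rows with similar sparsity patterns next to each other is what
    lets SpMxV reuse x-vector entries while they are still in cache.
    """
    new_ptr, new_idx, new_vals = [0], [], []
    for i in perm:
        start, end = row_ptr[i], row_ptr[i + 1]
        new_idx.extend(col_idx[start:end])
        new_vals.extend(vals[start:end])
        new_ptr.append(len(new_idx))
    return new_ptr, new_idx, new_vals

# 3x3 example: A = [[2,0,1],[0,3,0],[4,0,5]], x = [1,1,1]
row_ptr, col_idx, vals = [0, 2, 3, 5], [0, 2, 1, 0, 2], [2.0, 1.0, 3.0, 4.0, 5.0]
print(spmv_csr(row_ptr, col_idx, vals, [1.0, 1.0, 1.0]))   # [3.0, 3.0, 9.0]
```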
Item Open Access
Combinatorial reductions between graph partitioning by vertex separator and hypergraph partitioning problems for parallel and scientific computing applications (2009) Kayaaslan, Enver

Item Open Access
Data distribution and performance optimization models for parallel data mining (2013) Özkural, Eray
We have embarked upon a multitude of approaches to improve the efficiency of selected fundamental tasks in data mining. The present thesis is concerned with improving the efficiency of parallel processing methods for large amounts of data. We have devised new parallel frequent itemset mining algorithms that work on both sparse and dense datasets, and 1-D and 2-D parallel algorithms for the all-pairs similarity problem. Two new parallel frequent itemset mining (FIM) algorithms named NoClique and NoClique2 parallelize our sequential vertical frequent itemset mining algorithm named bitdrill, and use a method based on graph partitioning by vertex separator (GPVS) to distribute and selectively replicate items. The method operates on a graph where vertices correspond to frequent items and edges correspond to frequent itemsets of size two. We show that partitioning this graph by a vertex separator is sufficient to decide a distribution of the items such that the sub-databases determined by the item distribution can be mined independently. This distribution entails an amount of data replication, which may be reduced by assigning appropriate weights to vertices. The data distribution scheme is used in the design of two new parallel frequent itemset mining algorithms. Both algorithms replicate the items that correspond to the separator: NoClique replicates the work induced by the separator, while NoClique2 computes the same work collectively. Computational load balancing and minimization of redundant or collective work may be achieved by assigning appropriate load estimates to vertices. The performance is compared to another parallelization that replicates all items and to the ParDCI algorithm. We introduce another parallel FIM method using a variation of item distribution with selective item replication. We extend the GPVS model for parallel FIM proposed earlier by relaxing the condition of independent mining: instead of finding independently mined item sets, we may minimize the amount of communication and partition the candidates in a fine-grained manner. We introduce a hypergraph partitioning model of the parallel computation where vertices correspond to candidates and hyperedges correspond to items; a load estimate is assigned to each candidate via vertex weights, and item frequencies are given as hyperedge weights. The model is shown to minimize data replication and balance load accurately. We also introduce a re-partitioning model, since only a limited number of candidate levels can be generated at once, using fixed vertices to model the previous item distribution/replication. Experiments show that we improve over the higher load imbalance of the NoClique2 algorithm on the same problem instances at the cost of additional parallel overhead. For the all-pairs similarity problem, we extend recent efficient sequential algorithms to a parallel setting and obtain document-wise and term-wise parallelizations of a fast sequential algorithm, as well as an elegant combination of two algorithms that yields a 2-D distribution of the data. Two effective algorithmic optimizations for the term-wise case are reported that make the term-wise parallelization feasible. These optimizations exploit local pruning and block processing of a number of vectors in order to decrease communication costs, the number of candidates, and communication/computation imbalance. The correctness of local pruning is proven. A recursive term-wise parallelization is also introduced. The performance of the algorithms is shown to be favorable in extensive experiments, as is the utility of the two major optimizations.
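The item-distribution property described above can be made concrete with a small sketch. The following Python sketch assumes a frequent-item graph already partitioned into two parts A and B with a vertex separator S, and projects each transaction onto the item sets owned by two sites, replicating the separator items to both; `project_databases` is an illustrative name, and the actual bitdrill, NoClique, and NoClique2 algorithms are not shown.

```python
def project_databases(transactions, part_a, part_b, separator):
    """Split a transaction database by an item distribution (A, B, S).

    Items in A go to site 0, items in B go to site 1, and separator items S
    are replicated to both sites.  Each transaction is projected onto the
    item set owned by a site; by the GPVS property described above, frequent
    itemsets can then be mined independently at each site.
    """
    own0 = part_a | separator
    own1 = part_b | separator
    db0 = [t & own0 for t in transactions if t & own0]
    db1 = [t & own1 for t in transactions if t & own1]
    return db0, db1

# Toy item-graph partition: A = {a, b}, B = {d, e}, separator = {c}.
transactions = [{"a", "b", "c"}, {"c", "d"}, {"a", "d", "e"}, {"b", "c", "e"}]
db0, db1 = project_databases(transactions, {"a", "b"}, {"d", "e"}, {"c"})
print(db0)  # projections onto {a, b, c}
print(db1)  # projections onto {c, d, e}
```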
Item Open Access
Hypergraph-based data partitioning (2013) Kayaaslan, Enver
A hypergraph is a generalization of a graph in which an edge may connect any number of vertices. This flexibility gives hypergraphs greater modeling power, allowing accurate formulation of many problems in combinatorial scientific computing. This thesis discusses the use of hypergraph-based approaches to solve problems that require data partitioning. The thesis is composed of three parts. In the first part, we show how to implement hypergraph partitioning efficiently using recursive graph bipartitioning. The remaining two parts show how to formulate two important data partitioning problems in parallel computing as hypergraph partitioning: global inverted index partitioning for parallel query processing, and row-columnwise sparse matrix partitioning for parallel matrix-vector multiplication, where both the multiplication and the sparse matrix partitioning schemes are novel. In this thesis, we show that hypergraph models achieve partitions of better quality.

Item Open Access
A recursive graph bipartitioning algorithm by vertex separators with fixed vertices for permuting sparse matrices into block diagonal form with overlap (2011) Acer, Seher
Solving a sparse system of linear equations Ax=b using preconditioners can be efficiently parallelized using graph partitioning tools. In this thesis, we investigate the problem of permuting a sparse matrix into block diagonal form with overlap, which is to be used in the parallelization of the multiplicative Schwarz preconditioner. A matrix is said to be in block diagonal form with overlap if its diagonal blocks may overlap. In order to formulate this permutation problem as a graph-theoretical problem, we introduce a restricted version of the graph partitioning by vertex separator (GPVS) problem, where the objective is to find a vertex partition whose parts are only connected by a vertex separator. The modified problem, which we refer to as the ordered GPVS (oGPVS) problem, is restricted such that the parts should exhibit an ordered form in which consecutive parts can only be connected by a separator. Existing graph partitioning tools are unable to solve the oGPVS problem, so we present a recursive graph bipartitioning algorithm by vertex separators together with a novel vertex fixation scheme, so that a GPVS tool supporting fixed vertices can be utilized effectively and efficiently. We also theoretically verify the correctness of the proposed approach by devising a necessary and sufficient condition for the feasibility of an oGPVS solution. Experimental results on a wide range of matrices confirm the validity of the proposed approach.
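The ordered constraint on oGPVS solutions can be sketched as a simple check. The Python sketch below encodes one plausible reading of "consecutive parts can only be connected by a separator" for an alternating sequence P_0, S_0, P_1, ..., P_{K-1}; it is not the thesis's necessary-and-sufficient feasibility condition, and the names used are illustrative.

```python
def is_ordered_gpvs(adj, parts, seps):
    """Check an ordered GPVS-style solution on graph `adj` (dict: v -> set of neighbors).

    parts : list of vertex sets P_0 ... P_{K-1} (the ordered parts)
    seps  : list of vertex sets S_0 ... S_{K-2}, S_j separating P_j and P_{j+1}

    Allowed edges under this reading of the ordered constraint:
      * inside a single part or a single separator,
      * between P_i and an adjacent separator S_{i-1} or S_i,
      * between consecutive separators S_j and S_{j+1}
        (both lie in the overlapping block j+1).
    Any edge directly connecting two distinct parts makes the solution infeasible.
    """
    label = {}
    for i, p in enumerate(parts):
        for v in p:
            label[v] = ("P", i)
    for j, s in enumerate(seps):
        for v in s:
            label[v] = ("S", j)

    def ok(u, v):
        (ku, iu), (kv, iv) = label[u], label[v]
        if ku == kv == "P":
            return iu == iv                      # no direct part-to-part edges
        if ku == kv == "S":
            return abs(iu - iv) <= 1
        if ku == "S":                            # normalize to (part, separator)
            (ku, iu), (kv, iv) = (kv, iv), (ku, iu)
        return iv in (iu - 1, iu)                # part P_i touches only S_{i-1}, S_i

    return all(ok(u, v) for u in adj for v in adj[u])

# Path graph 0-1-2-3-4: P_0 = {0, 1}, S_0 = {2}, P_1 = {3, 4} is a valid ordered solution.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(is_ordered_gpvs(adj, [{0, 1}, {3, 4}], [{2}]))  # True
```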
Item Open Access
Replicated hypergraph partitioning (2010) Selvitopi, Reha Oğuz
Hypergraph partitioning has recently been used in distributed information retrieval (IR) and spatial databases to correctly capture communication and disk access costs. In the hypergraph models for these areas, the quality of the partitions obtained by hypergraph partitioning can be crucial for the objective of the targeted problem. Replication is a widely used technique for addressing different performance issues in distributed IR and database systems; its main motivation is to improve performance on the targeted issue at the cost of using more space. In this work, we focus on replicated hypergraph partitioning schemes that improve the quality of hypergraph partitioning by vertex replication. To this end, we propose a replicated partitioning scheme in which replication and partitioning are performed in conjunction. Our approach utilizes the successful multilevel and recursive bipartitioning methodologies for hypergraph partitioning. Replication is achieved in the uncoarsening phase of the multilevel methodology by extending the efficient Fiduccia-Mattheyses (FM) iterative improvement heuristic; we call this extended heuristic replicated FM (rFM). The proposed rFM heuristic supports move, replication, and unreplication operations on the vertices by introducing new algorithms and vertex states. We show that rFM has the same complexity as FM and integrate the proposed replication scheme into the multilevel hypergraph partitioning tool PaToH. We test the proposed replication scheme on realistic datasets and obtain promising results.
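The kind of gain an rFM-style heuristic must evaluate for a replication move can be sketched as follows, reusing the placement-as-set-of-parts view and cut-net metric from the earlier sketch; the actual rFM data structures, vertex states, and unreplication moves are not modeled here.

```python
def replication_gain(nets_of, placement, v, p):
    """Cut-net gain of replicating vertex v into part p.

    nets_of   : dict vertex -> list of incident nets (each net a list of pins)
    placement : dict vertex -> set of parts currently holding a copy
    A cut net incident to v becomes uncut if part p already covers every
    other pin of that net, so the gain is the number of such nets.
    """
    gain = 0
    for net in nets_of[v]:
        covering = set.intersection(*(placement[u] for u in net))
        if covering:
            continue                       # net is already uncut
        others = [u for u in net if u != v]
        if all(p in placement[u] for u in others):
            gain += 1                      # replicating v into p uncuts this net
    return gain

# Net [1, 2] is cut between parts 0 and 1; replicating vertex 2 into part 0 uncuts it.
placement = {0: {0}, 1: {0}, 2: {1}, 3: {1}}
nets_of = {2: [[1, 2], [2, 3]]}
print(replication_gain(nets_of, placement, v=2, p=0))  # 1
```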