Author: Acer, Seher
Title: Recursive bipartitioning models for performance improvement in sparse matrix computations
Title (Turkish): Seyrek matris hesaplamalarında performans iyileşmesi için özyinelemeli ikiye bölümleme modelleri
Type: Thesis (Ph.D.), İhsan Doğramacı Bilkent University, Department of Computer Engineering, 2017
Date issued: 2017-08 (record dates: 2017-09-07, 2017-09-08)
Handle: http://hdl.handle.net/11693/33583
Language: English
Rights: info:eu-repo/semantics/openAccess
Physical description: xv, 151 leaves : charts (some color) ; 30 cm
Notes: Cataloged from PDF version of article. Includes bibliographical references (leaves 144-151).
Keywords: Sparse matrices; Recursive bipartitioning; Graph partitioning; Hypergraph partitioning; Distributed-memory architectures; Communication cost; Envelope methods; Factorization; Profile reduction
Record ID: B156134

Abstract:
Sparse matrix computations are among the most important building blocks of linear algebra and arise in many scientific and engineering problems. Depending on the problem type, these computations may be in the form of sparse matrix-dense matrix multiplication (SpMM), sparse matrix-vector multiplication (SpMV), or factorization of a sparse symmetric matrix. For both SpMM and SpMV performed on distributed-memory architectures, the associated data and task partitions among processors affect the parallel performance to a great extent, especially for sparse matrices with an irregular sparsity pattern. Parallel SpMM is characterized by high volumes of data communicated among processors, whereas both the volume and the number of messages are important for parallel SpMV. For the factorization performed in envelope methods, the envelope size (i.e., profile) is an important factor that determines the performance. To improve the performance of each of these sparse matrix computations, we propose graph/hypergraph partitioning models that exploit the advantages provided by the recursive bipartitioning (RB) paradigm in order to meet the specific needs of the respective computation. In the models proposed for SpMM and SpMV, we utilize the RB process to target multiple volume-based communication cost metrics and a combination of volume- and number-based communication cost metrics in their partitioning objectives, respectively. In the model proposed for the factorization in envelope methods, the input matrix is reordered by utilizing the RB process, in which two new quality metrics relating to profile minimization are defined and maintained. The experimental results show that the proposed RB-based approach outperforms the state of the art for each of these computations.
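
The following is a minimal sketch, not the thesis's actual models, of the recursive bipartitioning (RB) idea the abstract builds on: a set of matrix rows is split into two parts, and each part is split again until the desired number of parts is reached, after which the communication volume of row-parallel SpMV under that partition can be measured. The bipartition step here is a naive nonzero-balanced split, a stand-in for the graph/hypergraph bipartitioners the thesis actually relies on, and all function and variable names are hypothetical.

    # Sketch of RB-based row partitioning and SpMV communication volume (assumptions noted above).
    from typing import Dict, List, Set, Tuple

    SparsePattern = Dict[int, List[int]]  # row index -> list of column indices (pattern only)


    def bipartition(rows: List[int], A: SparsePattern) -> Tuple[List[int], List[int]]:
        """Naive nonzero-balanced bipartition (stand-in for a real graph/hypergraph bipartitioner)."""
        ordered = sorted(rows, key=lambda r: len(A[r]), reverse=True)
        left, right = [], []
        nnz_left = nnz_right = 0
        for r in ordered:
            if nnz_left <= nnz_right:
                left.append(r); nnz_left += len(A[r])
            else:
                right.append(r); nnz_right += len(A[r])
        return left, right


    def recursive_bipartition(rows: List[int], A: SparsePattern, levels: int) -> List[List[int]]:
        """Split the row set recursively into up to 2**levels parts."""
        if levels == 0 or len(rows) <= 1:
            return [rows]
        left, right = bipartition(rows, A)
        return recursive_bipartition(left, A, levels - 1) + recursive_bipartition(right, A, levels - 1)


    def spmv_volume(parts: List[List[int]], A: SparsePattern) -> int:
        """Total volume of x-vector entries communicated in row-parallel SpMV (y = A x),
        assuming x is partitioned conformally with the rows."""
        owner = {r: p for p, rows in enumerate(parts) for r in rows}
        volume = 0
        for p, rows in enumerate(parts):
            needed: Set[int] = {j for r in rows for j in A[r]}
            # Every x[j] owned by another processor must be received once.
            volume += sum(1 for j in needed if owner.get(j, p) != p)
        return volume


    if __name__ == "__main__":
        A = {0: [0, 1], 1: [1, 2], 2: [0, 2, 3], 3: [3]}
        parts = recursive_bipartition(list(A), A, levels=1)  # two parts
        print(parts, "volume =", spmv_volume(parts, A))

In the thesis's setting the bipartition step would encode the targeted communication cost metrics (multiple volume-based metrics for SpMM, combined volume- and number-based metrics for SpMV) directly in the partitioning objective rather than only balancing nonzeros as this sketch does.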
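
As background for the envelope-method part of the abstract, the standard definition of the envelope and profile of a symmetric matrix is recalled below; the thesis's two new RB-based quality metrics for profile minimization are not reproduced here. For a symmetric $n \times n$ matrix $A$, let $f_i(A) = \min\{\, j : a_{ij} \neq 0 \,\}$ denote the column of the first nonzero in row $i$. Then

    \mathrm{Env}(A) = \{\, (i,j) : f_i(A) \le j < i \,\}, \qquad \mathrm{profile}(A) = \sum_{i=1}^{n} \bigl( i - f_i(A) \bigr),

and a profile-reducing reordering seeks a permutation matrix $P$ that minimizes $\mathrm{profile}(P A P^{T})$.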