dc.contributor.advisor | Aykanat, Cevdet | |
dc.contributor.author | Akbudak, Kadir | |
dc.date.accessioned | 2016-01-08T18:16:56Z | |
dc.date.available | 2016-01-08T18:16:56Z | |
dc.date.issued | 2009 | |
dc.identifier.uri | http://hdl.handle.net/11693/15336 | |
dc.description | Ankara : The Department of Computer Engineering and Information Science and the Institute of Engineering and Science of Bilkent University, 2009. | en_US |
dc.description | Thesis (Master's) -- Bilkent University, 2009. | en_US |
dc.description | Includes bibliographical references (leaves 52-56). | en_US |
dc.description.abstract | The sparse matrix-vector multiplication (SpMxV) is an important kernel operation
widely used in linear solvers. In these solvers, the same sparse matrix is multiplied by a dense vector
repeatedly to solve a system of linear equations. High performance
gains can be obtained if we can take advantage of today's deep cache hierarchies
in SpMxV operations. Matrices with irregular sparsity patterns make it difficult to
utilize data locality effectively in SpMxV computations. Different techniques have been proposed
in the literature to utilize the cache hierarchy effectively by exploiting data locality
during SpMxV. In this work, we investigate two distinct frameworks for cache-aware/oblivious
SpMxV: single matrix-vector multiply and multiple submatrix-vector
multiplies. For the single matrix-vector multiply framework, we propose a cache-size-aware
top-down row/column-reordering approach based on 1D sparse matrix partitioning
that utilizes the recently proposed appropriate hypergraph models of sparse
matrices, and a cache-oblivious bottom-up approach based on hierarchical clustering
of rows/columns with similar sparsity patterns. We also propose a column compression
scheme as a preprocessing step, which makes these two approaches cache-line-size-aware.
The multiple submatrix-vector multiplies framework depends on partitioning
the matrix into multiple nonzero-disjoint submatrices. For the effective matrix-to-submatrix
partitioning required in this framework, we propose a cache-size-aware
top-down approach based on 2D sparse matrix partitioning that utilizes the recently
proposed fine-grain hypergraph model. For this framework, we also propose a traveling
salesman formulation for an effective ordering of individual submatrix-vector
multiply operations. We evaluate the validity of our models and methods on a wide
range of sparse matrices. Experimental results show that the proposed methods and models
outperform state-of-the-art schemes. | en_US |
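The SpMxV kernel described in the abstract is commonly implemented over a compressed sparse row (CSR) layout; the irregular, pattern-dependent accesses to the input vector are what the reordering methods above try to localize. A minimal sketch (array names such as `row_ptr` and `col_idx` are illustrative conventions, not taken from the thesis):

```python
# Minimal CSR (compressed sparse row) SpMxV sketch: y = A @ x.
# The CSR arrays row_ptr, col_idx, vals are illustrative names, not from the thesis.

def spmv_csr(row_ptr, col_idx, vals, x):
    """Multiply a CSR-format sparse matrix by a dense vector x."""
    n = len(row_ptr) - 1          # number of rows
    y = [0.0] * n
    for i in range(n):
        # Row i's nonzeros occupy positions row_ptr[i] .. row_ptr[i+1]-1.
        s = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            # Access to x[col_idx[k]] is irregular; its locality depends on
            # the sparsity pattern, which row/column reordering aims to improve.
            s += vals[k] * x[col_idx[k]]
        y[i] = s
    return y

# 3x3 example matrix:
# [[2, 0, 1],
#  [0, 3, 0],
#  [4, 0, 5]]
row_ptr = [0, 2, 3, 5]
col_idx = [0, 2, 1, 0, 2]
vals = [2.0, 1.0, 3.0, 4.0, 5.0]
print(spmv_csr(row_ptr, col_idx, vals, [1.0, 1.0, 1.0]))  # → [3.0, 3.0, 9.0]
```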
dc.description.statementofresponsibility | Akbudak, Kadir | en_US |
dc.format.extent | xiv, 56 leaves | en_US |
dc.language.iso | English | en_US |
dc.rights | info:eu-repo/semantics/openAccess | en_US |
dc.subject | Cache locality | en_US |
dc.subject | Sparse matrices | en_US |
dc.subject | Matrix-vector multiplication | en_US |
dc.subject | Matrix reordering | en_US |
dc.subject | Computational hypergraph model | en_US |
dc.subject | Hypergraph partitioning | en_US |
dc.subject | Traveling salesman problem | en_US |
dc.subject.lcc | QA188 .A53 2009 | en_US |
dc.subject.lcsh | Sparse matrices--Data processing. | en_US |
dc.subject.lcsh | Cache memory. | en_US |
dc.subject.lcsh | Hypergraphs. | en_US |
dc.title | Cache locality exploiting methods and models for sparse matrix-vector multiplication | en_US |
dc.type | Thesis | en_US |
dc.department | Department of Computer Engineering | en_US |
dc.publisher | Bilkent University | en_US |
dc.description.degree | M.S. | en_US |
dc.identifier.itemid | B118111 | |