Cache locality exploiting methods and models for sparse matrix-vector multiplication

buir.advisor: Aykanat, Cevdet
dc.contributor.author: Akbudak, Kadir
dc.date.accessioned: 2016-01-08T18:16:56Z
dc.date.available: 2016-01-08T18:16:56Z
dc.date.issued: 2009
dc.description: Ankara : The Department of Computer Engineering and Information Science and the Institute of Engineering and Science of Bilkent University, 2009.
dc.description: Thesis (Master's) -- Bilkent University, 2009.
dc.description: Includes bibliographical references (leaves 52-56).
dc.description.abstract: The sparse matrix-vector multiplication (SpMxV) is an important kernel operation widely used in linear solvers. These solvers repeatedly multiply the same sparse matrix by a dense vector to solve a system of linear equations. High performance gains can be obtained by taking advantage of today's deep cache hierarchy in SpMxV operations. Matrices with irregular sparsity patterns make it difficult to utilize data locality effectively in SpMxV computations. Several techniques have been proposed in the literature to utilize the cache hierarchy effectively by exploiting data locality during SpMxV. In this work, we investigate two distinct frameworks for cache-aware/cache-oblivious SpMxV: single matrix-vector multiply and multiple submatrix-vector multiplies. For the single matrix-vector multiply framework, we propose a cache-size-aware top-down row/column-reordering approach based on 1D sparse matrix partitioning that utilizes recently proposed hypergraph models of sparse matrices, and a cache-oblivious bottom-up approach based on hierarchical clustering of rows/columns with similar sparsity patterns. We also propose a column compression scheme as a preprocessing step that makes these two approaches cache-line-size aware. The multiple submatrix-vector multiplies framework depends on partitioning the matrix into multiple nonzero-disjoint submatrices. For the effective matrix-to-submatrix partitioning required in this framework, we propose a cache-size-aware top-down approach based on 2D sparse matrix partitioning that utilizes the recently proposed fine-grain hypergraph model. For this framework, we also propose a traveling salesman formulation for an effective ordering of the individual submatrix-vector multiply operations. We evaluate the validity of our models and methods on a wide range of sparse matrices. Experimental results show that the proposed methods and models outperform state-of-the-art schemes.
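For context, the locality problem the abstract describes arises in the standard CSR-based (Compressed Sparse Row) SpMxV kernel. The sketch below is a conventional textbook kernel, not code from the thesis; the function name, CSR layout, and example matrix are illustrative assumptions. Rows of the matrix and entries of the output vector are streamed sequentially, but the input vector is read indirectly through the column indices, so irregular sparsity patterns scatter those reads across cache lines; the reordering methods in the thesis aim to make nearby rows share column indices and thus improve the temporal locality of these reads.

#include <stdio.h>

/* Hypothetical minimal CSR SpMxV kernel, y = A*x.
 * val/col_ind hold the nnz nonzeros row by row; row_ptr[i]..row_ptr[i+1]
 * delimits row i. Accesses to val, col_ind, and y are sequential;
 * accesses to x follow col_ind and are the locality bottleneck. */
void spmxv_csr(int n, const int *row_ptr, const int *col_ind,
               const double *val, const double *x, double *y)
{
    for (int i = 0; i < n; ++i) {
        double sum = 0.0;
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            sum += val[k] * x[col_ind[k]];  /* irregular reads of x */
        y[i] = sum;
    }
}

int main(void)
{
    /* 3x3 example: A = [[1 0 2], [0 3 0], [4 0 5]], x = [1 1 1] */
    int row_ptr[] = {0, 2, 3, 5};
    int col_ind[] = {0, 2, 1, 0, 2};
    double val[]  = {1, 2, 3, 4, 5};
    double x[]    = {1, 1, 1};
    double y[3];

    spmxv_csr(3, row_ptr, col_ind, val, x, y);
    printf("y = [%g %g %g]\n", y[0], y[1], y[2]);  /* prints y = [3 3 9] */
    return 0;
}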
dc.description.statementofresponsibility: Akbudak, Kadir
dc.format.extent: xiv, 56 leaves
dc.identifier.itemid: B118111
dc.identifier.uri: http://hdl.handle.net/11693/15336
dc.language.iso: English
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: Cache locality
dc.subject: Sparse matrices
dc.subject: Matrix-vector multiplication
dc.subject: Matrix reordering
dc.subject: Computational hypergraph model
dc.subject: Hypergraph partitioning
dc.subject: Traveling salesman problem
dc.subject.lcc: QA188 .A53 2009
dc.subject.lcsh: Sparse matrices--Data processing.
dc.subject.lcsh: Cache memory.
dc.subject.lcsh: Hypergraphs.
dc.title: Cache locality exploiting methods and models for sparse matrix-vector multiplication
dc.type: Thesis
thesis.degree.discipline: Computer Engineering
thesis.degree.grantor: Bilkent University
thesis.degree.level: Master's
thesis.degree.name: MS (Master of Science)

Files

Original bundle

Name: 0006083.pdf
Size: 7.21 MB
Format: Adobe Portable Document Format