Increasing data reuse in parallel sparse matrix-vector and matrix-transpose-vector multiply on shared-memory architectures

buir.advisorAykanat, Cevdet
dc.contributor.authorKarsavuran, Mustafa Ozan
dc.date.accessioned2016-01-08T20:18:19Z
dc.date.available2016-01-08T20:18:19Z
dc.date.issued2014
dc.descriptionAnkara : The Department of Computer Engineering and the Graduate School of Engineering and Science of Bilkent University, 2014.en_US
dc.descriptionThesis (Master's) -- Bilkent University, 2014.en_US
dc.descriptionIncludes bibliographical references (leaves 44-48).en_US
dc.description.abstractSparse matrix-vector and matrix-transpose-vector multiplications (Sparse AA^Tx) are the kernel operations used in iterative solvers. The sparsity pattern of the input matrix A, as well as that of its transpose, remains the same throughout the iterations. The CPU cache cannot be utilized effectively during these Sparse AA^Tx operations due to the irregular sparsity pattern of the matrix. We propose two parallelization strategies for Sparse AA^Tx. Our methods partition matrix A in order to exploit cache locality for matrix nonzeros and vector entries. We conduct experiments on the recently released Intel® Xeon Phi™ coprocessor involving a large variety of sparse matrices. Experimental results show that the proposed methods achieve higher performance improvements than state-of-the-art methods in the literature.en_US
dc.description.provenanceMade available in DSpace on 2016-01-08T20:18:19Z (GMT). No. of bitstreams: 1 1.pdf: 78510 bytes, checksum: d85492f20c2362aa2bcf4aad49380397 (MD5)en
dc.description.statementofresponsibilityKarsavuran, Mustafa Ozanen_US
dc.embargo.release2016-09-05
dc.format.extentx, 48 leaves, graphicsen_US
dc.identifier.itemidB148325
dc.identifier.urihttp://hdl.handle.net/11693/18330
dc.language.isoEnglishen_US
dc.rightsinfo:eu-repo/semantics/openAccessen_US
dc.subjectIntel Many Integrated Core Architecture (Intel MIC)en_US
dc.subjectIntel Xeon Phien_US
dc.subjectCache Localityen_US
dc.subjectSparse Matrixen_US
dc.subjectSparse Matrix-Vector Multiplicationen_US
dc.subjectSparse Matrix-Vector and Matrix-Transpose-Vector Multiplicationen_US
dc.subjectHypergraph Modelen_US
dc.subjectHypergraph Partitioningen_US
dc.subject.lccQA76.88 .K37 2014en_US
dc.subject.lcshComputer architecture.en_US
dc.subject.lcshHigh performance computing.en_US
dc.subject.lcshDistributed shared memory.en_US
dc.subject.lcshComputer programming.en_US
dc.titleIncreasing data reuse in parallel sparse matrix-vector and matrix-transpose-vector multiply on shared-memory architecturesen_US
dc.title.alternativePaylaşılan bellek mimarisinde gerçekleştirilen paralel seyrek matris-vektör ve devrik-matris-vektör çarpımında veri yeniden kullanımını arttırmaken_US
dc.typeThesisen_US
thesis.degree.disciplineComputer Engineering
thesis.degree.grantorBilkent University
thesis.degree.levelMaster's
thesis.degree.nameMS (Master of Science)
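
For readers of the abstract above, a minimal C sketch of the baseline Sparse AA^Tx kernel is included below: y = A(A^T x) computed as two sweeps over the same CSR arrays of A, so the transpose never needs to be stored separately. The csr_t struct and the spmv_aat routine are illustrative assumptions, not the implementation proposed in the thesis, which additionally partitions A to improve cache reuse.

/* Baseline Sparse AA^Tx kernel: y = A (A^T x), two passes over the
 * same CSR arrays of A.  Illustrative sketch only; names and layout
 * are assumptions, not the thesis implementation. */
typedef struct {
    int     nrows, ncols;
    int    *rowptr;   /* size nrows + 1                 */
    int    *colind;   /* size nnz (column index per nz) */
    double *val;      /* size nnz (value per nz)        */
} csr_t;

/* z is a caller-supplied work array of length ncols; y has length nrows. */
void spmv_aat(const csr_t *A, const double *x, double *z, double *y)
{
    /* Pass 1: z = A^T x.  The CSR arrays of A serve as CSC arrays of A^T,
     * so each nonzero a_ij contributes a_ij * x_i to z_j (scatter form). */
    for (int j = 0; j < A->ncols; j++)
        z[j] = 0.0;
    for (int i = 0; i < A->nrows; i++)
        for (int k = A->rowptr[i]; k < A->rowptr[i + 1]; k++)
            z[A->colind[k]] += A->val[k] * x[i];

    /* Pass 2: y = A z, a standard CSR SpMV over the same nonzeros. */
    for (int i = 0; i < A->nrows; i++) {
        double sum = 0.0;
        for (int k = A->rowptr[i]; k < A->rowptr[i + 1]; k++)
            sum += A->val[k] * z[A->colind[k]];
        y[i] = sum;
    }
}

Because colind follows the irregular sparsity pattern of A, the scatter into z in pass 1 and the gather from z in pass 2 touch memory irregularly; this is the cache-locality problem the thesis addresses by partitioning A.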

Files

Original bundle

Name: 10049370.pdf.pdf
Size: 2.01 MB
Format: Adobe Portable Document Format
Description: Full printable version