Parallelization of Sparse Matrix Kernels for Big Data Applications

buir.contributor.author: Aykanat, Cevdet
dc.citation.epage: 382
dc.citation.spage: 367
dc.contributor.author: Selvitopu, Oğuz
dc.contributor.author: Akbudak, Kadir
dc.contributor.author: Aykanat, Cevdet
dc.contributor.editor: Pop, F.
dc.contributor.editor: Kołodziej, J.
dc.contributor.editor: Di Martino, B.
dc.date.accessioned: 2019-05-30T06:56:50Z
dc.date.available: 2019-05-30T06:56:50Z
dc.date.issued: 2016
dc.department: Department of Computer Engineering
dc.description: Chapter 17
dc.description.abstract: Analysis of big data on large-scale distributed systems often necessitates efficient parallel graph algorithms to explore the relationships between individual components. Graph algorithms use the basic adjacency-list representation of graphs, which can also be viewed as a sparse matrix. This correspondence between graphs and sparse matrices makes it possible to express many important graph algorithms in terms of basic sparse matrix operations, for which the optimization literature is more mature. For example, graph analytics libraries such as Pegasus and Combinatorial BLAS use sparse matrix kernels for a wide variety of operations on graphs. In this work, we focus on two such important sparse matrix kernels: sparse matrix–sparse matrix multiplication (SpGEMM) and sparse matrix–dense matrix multiplication (SpMM). We propose partitioning models for efficient parallelization of these kernels on large-scale distributed systems. Our models aim at reducing communication volume while balancing computational load, two vital performance metrics on distributed systems. We show that by exploiting the sparsity patterns of the matrices through our models, the parallel performance of SpGEMM and SpMM can be significantly improved.
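The graph/sparse-matrix correspondence and the two kernels named in the abstract can be illustrated with a minimal sketch (not code from the chapter) using SciPy's CSR format; the graph, edge list, and feature matrix below are invented for illustration:

```python
# Illustrative sketch: a directed graph's adjacency structure stored as a
# CSR sparse matrix A; SpGEMM (sparse x sparse) and SpMM (sparse x dense)
# then become ordinary matrix products.
import numpy as np
from scipy.sparse import csr_matrix

# Directed graph with edges 0->1, 0->2, 1->2, 2->0 (hypothetical example)
rows = [0, 0, 1, 2]
cols = [1, 2, 2, 0]
A = csr_matrix((np.ones(4), (rows, cols)), shape=(3, 3))

# SpGEMM: entry (A @ A)[i, j] counts the length-2 paths from i to j
A2 = A @ A
print(A2.toarray())

# SpMM: multiply the sparse adjacency matrix by a dense matrix X, e.g.
# one step of aggregating each vertex's 2-dimensional neighbor features
X = np.arange(6, dtype=float).reshape(3, 2)
Y = A @ X
print(Y)
```

Parallelizing these products on a distributed system amounts to partitioning the nonzeros of A (and the rows of X) across processors, which is where the chapter's partitioning models come in.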
dc.description.provenance: Submitted by Evrim Ergin (eergin@bilkent.edu.tr) on 2019-05-30T06:56:50Z. No. of bitstreams: 1. Parallelization_of_sparse_matrix_kernels_for_big_data_applications.pdf: 540870 bytes, checksum: b7e87ac2a0c67ea8c0342d23be83cedf (MD5)
dc.identifier.doi: 10.1007/978-3-319-44881-7_17
dc.identifier.eisbn: 9783319448817
dc.identifier.isbn: 9783319448800
dc.identifier.uri: http://hdl.handle.net/11693/51953
dc.language.iso: English
dc.publisher: Springer
dc.relation.ispartof: Resource management for big data platforms: algorithms, modelling, and high-performance computing techniques
dc.relation.ispartofseries: Computer Communications and Networks
dc.relation.isversionof: https://doi.org/10.1007/978-3-319-44881-7_17
dc.relation.isversionof: https://doi.org/10.1007/978-3-319-44881-7
dc.subject: Big data
dc.subject: Graph analytics
dc.subject: Sparse matrices
dc.subject: Parallel computing
dc.subject: High performance computing
dc.subject: Combinatorial scientific computing
dc.title: Parallelization of Sparse Matrix Kernels for Big Data Applications
dc.type: Book Chapter

Files

Original bundle
Name: Parallelization_of_sparse_matrix_kernels_for_big_data_applications.pdf
Size: 528.19 KB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed upon to submission