Parallelization of Sparse Matrix Kernels for Big Data Applications
buir.contributor.author | Aykanat, Cevdet | |
dc.citation.epage | 382 | en_US |
dc.citation.spage | 367 | en_US |
dc.contributor.author | Selvitopi, Oğuz | en_US |
dc.contributor.author | Akbudak, Kadir | en_US |
dc.contributor.author | Aykanat, Cevdet | en_US |
dc.contributor.editor | Pop, F. | |
dc.contributor.editor | Kołodziej, J. | |
dc.contributor.editor | Di Martino, B. | |
dc.date.accessioned | 2019-05-30T06:56:50Z | |
dc.date.available | 2019-05-30T06:56:50Z | |
dc.date.issued | 2016 | en_US |
dc.department | Department of Computer Engineering | en_US |
dc.description | Chapter 17 | |
dc.description.abstract | Analysis of big data on large-scale distributed systems often necessitates efficient parallel graph algorithms that are used to explore the relationships between individual components. Graph algorithms use the basic adjacency list representation for graphs, which can also be viewed as a sparse matrix. This correspondence between the representation of graphs and sparse matrices makes it possible to express many important graph algorithms in terms of basic sparse matrix operations, where the literature for optimization is more mature. For example, graph analytics libraries such as Pegasus and Combinatorial BLAS use sparse matrix kernels for a wide variety of operations on graphs. In this work, we focus on two such important sparse matrix kernels: sparse matrix–sparse matrix multiplication (SpGEMM) and sparse matrix–dense matrix multiplication (SpMM). We propose partitioning models for efficient parallelization of these kernels on large-scale distributed systems. Our models aim at reducing communication volume while balancing computational load, which are two vital performance metrics on distributed systems. We show that by exploiting the sparsity patterns of the matrices through our models, the parallel performance of SpGEMM and SpMM operations can be significantly improved. | en_US |
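The following is a minimal, illustrative sketch (not taken from the chapter) of the graph/sparse-matrix correspondence and the two kernels named in the abstract, assuming Python with NumPy and SciPy. The toy graph, variable names, and the serial SciPy calls are assumptions for illustration only; they do not represent the chapter's distributed partitioning models.

import numpy as np
from scipy.sparse import csr_matrix

# Adjacency list of a small directed graph, viewed as a sparse matrix A.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
rows, cols = zip(*edges)
A = csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(4, 4))

# SpGEMM: sparse matrix-sparse matrix multiplication.
# (A @ A)[i, j] counts the length-2 paths from vertex i to vertex j.
A2 = A @ A

# SpMM: sparse matrix-dense matrix multiplication.
# X holds a dense feature vector per vertex; A @ X aggregates neighbor features.
X = np.random.rand(4, 2)
Y = A @ X

print(A2.toarray())
print(Y)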
dc.identifier.doi | 10.1007/978-3-319-44881-7_17 | en_US |
dc.identifier.eisbn | 9783319448817 | |
dc.identifier.isbn | 9783319448800 | |
dc.identifier.uri | http://hdl.handle.net/11693/51953 | |
dc.language.iso | English | en_US |
dc.publisher | Springer | en_US |
dc.relation.ispartof | Resource management for big data platforms: algorithms, modelling, and high-performance computing techniques | en_US |
dc.relation.ispartofseries | Computer Communications and Networks; | |
dc.relation.isversionof | https://doi.org/10.1007/978-3-319-44881-7_17 | en_US |
dc.relation.isversionof | https://doi.org/10.1007/978-3-319-44881-7 | en_US |
dc.subject | Big data | en_US |
dc.subject | Graph analytics | en_US |
dc.subject | Sparse matrices | en_US |
dc.subject | Parallel computing | en_US |
dc.subject | High performance computing | en_US |
dc.subject | Combinatorial scientific computing | en_US |
dc.title | Parallelization of Sparse Matrix Kernels for Big Data Applications | en_US |
dc.type | Book Chapter | en_US |
Files
Original bundle
- Parallelization_of_sparse_matrix_kernels_for_big_data_applications.pdf (528.19 KB, Adobe Portable Document Format)
License bundle
- license.txt (1.71 KB, item-specific license agreed upon to submission)