Cartesian partitioning models for 2D and 3D parallel SpGEMM algorithms
Date
2020
Source Title
IEEE Transactions on Parallel and Distributed Systems
Print ISSN
1045-9219
Publisher
IEEE
Volume
31
Issue
12
Pages
2763 - 2775
Language
English
Type
Article
Abstract
The focus of this work is the distributed-memory parallelization of sparse general matrix-matrix multiplication (SpGEMM). Parallel SpGEMM algorithms are classified into one-dimensional (1D), 2D, and 3D categories according to the number of dimensions along which the 3D sparse workcube representing the iteration space of SpGEMM is partitioned. Recently proposed and successful 2D- and 3D-parallel SpGEMM algorithms benefit from upper bounds on communication overheads enforced by 2D and 3D Cartesian partitioning of the workcube on 2D and 3D virtual processor grids, respectively. However, these methods rely on random Cartesian partitioning and do not exploit the sparsity patterns of SpGEMM instances to reduce communication overheads. We propose hypergraph models for 2D and 3D Cartesian partitioning of the workcube that further reduce the communication overheads of these 2D- and 3D-parallel SpGEMM algorithms. The proposed models use two- and three-phase partitioning schemes that exploit multi-constraint hypergraph partitioning formulations. Extensive experimentation on 20 SpGEMM instances using up to 900 processors demonstrates that the proposed partitioning models significantly improve the scalability of the 2D and 3D algorithms. For example, for the 2D-parallel SpGEMM algorithm on 900 processors, the proposed partitioning model achieves 85 and 42 percent decreases in total communication volume and total number of messages, respectively, leading to 1.63 times higher speedup than random partitioning, on average.
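As a rough illustration of the terminology in the abstract, the sketch below (Python with SciPy; not the paper's code) enumerates the workcube voxels (i, k, j) of C = A*B, i.e., the triples where A[i,k] and B[k,j] are both nonzero, and maps each voxel's output entry onto a q x q virtual processor grid with a uniform block Cartesian map. The function names and the uniform blocking are illustrative assumptions; the paper's models instead select the Cartesian partition via multi-constraint hypergraph partitioning to reduce communication volume and message counts.

```python
# Minimal sketch of the SpGEMM "workcube" and a 2D Cartesian assignment
# of its voxels to a virtual processor grid (illustrative, not the paper's method).
from scipy.sparse import random as sprandom

def workcube(A, B):
    """Enumerate iteration-space voxels (i, k, j) of C = A @ B,
    i.e., triples with A[i, k] != 0 and B[k, j] != 0."""
    A = A.tocsr()
    B = B.tocsr()
    voxels = []
    for i in range(A.shape[0]):
        for k in A.indices[A.indptr[i]:A.indptr[i + 1]]:   # nonzeros in row i of A
            for j in B.indices[B.indptr[k]:B.indptr[k + 1]]:  # nonzeros in row k of B
                voxels.append((i, k, j))
    return voxels

def cartesian_2d_owner(i, j, m, n, q):
    """Map the output coordinate (i, j) of a voxel onto a q x q processor grid
    by blocking the rows of C over grid rows and the columns of C over grid columns."""
    pr = i * q // m   # grid row owning row i of C
    pc = j * q // n   # grid column owning column j of C
    return pr, pc

if __name__ == "__main__":
    m = n = 60
    A = sprandom(m, m, density=0.05, format="csr", random_state=0)
    B = sprandom(m, n, density=0.05, format="csr", random_state=1)
    cube = workcube(A, B)
    q = 3  # 3 x 3 virtual processor grid, i.e., 9 processors
    loads = {}
    for (i, k, j) in cube:
        owner = cartesian_2d_owner(i, j, m, n, q)
        loads[owner] = loads.get(owner, 0) + 1  # one multiply-add per voxel
    print(f"{len(cube)} voxels; per-processor multiply-add counts: {loads}")
```

Under this uniform blocking, per-processor work and communication depend entirely on where the nonzeros happen to fall; the hypergraph models described in the abstract choose the row/column-to-grid assignment so that the resulting 2D or 3D Cartesian partition balances work while reducing total volume and number of messages.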
Keywords
Sparse matrix-matrix multiplication
SpGEMM
Sparse SUMMA SpGEMM
Split-3D-SpGEMM
Hypergraph partitioning
Communication cost
Bandwidth
Latency