Authors: Acer, S.; Selvitopi, O.; Aykanat, Cevdet
Date available: 2019-02-21
Date issued: 2018
ISSN: 0743-7315
URI: http://hdl.handle.net/11693/49915
Abstract: For the parallelization of sparse matrix-vector multiplication (SpMV) on distributed-memory systems, nonzero-based fine-grain and medium-grain partitioning models attain the lowest communication volume and computational imbalance among all partitioning models. This usually comes, however, at the expense of a high message count, i.e., high latency overhead. This work addresses this shortcoming by proposing new fine-grain and medium-grain models that are able to minimize communication volume and message count in a single partitioning phase. The new models utilize message nets in order to encapsulate the minimization of total message count. We further fine-tune these models by proposing delayed addition and thresholding for message nets in order to establish a trade-off between the conflicting objectives of minimizing communication volume and message count. Experiments on an extensive dataset of nearly one thousand matrices show that the proposed models improve the total message count of the original nonzero-based models by up to 27% on average, which is reflected in the parallel runtime of SpMV as an average reduction of 15% on 512 processors.
Language: English
Keywords: Communication overhead; Fine-grain partitioning; Hypergraph; Load balancing; Medium-grain partitioning; Recursive bipartitioning; Row-column-parallel SpMV; Sparse matrix; Sparse matrix-vector multiplication
Title: Optimizing nonzero-based sparse matrix partitioning models via reducing latency
Type: Article
DOI: 10.1016/j.jpdc.2018.08.005
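
Note: The following is a minimal illustrative sketch, not the authors' code, of the nonzero-based fine-grain hypergraph model that the abstract builds on. In this model, each nonzero a_ij of the matrix becomes a vertex, and each row and column with nonzeros becomes a net connecting the vertices of its nonzeros; partitioning the vertices assigns nonzeros to processors, and cut nets correspond to communicated x- or y-vector entries. The paper's contribution, message nets added during recursive bipartitioning (with delayed addition and thresholding), augments this baseline and is not shown here. The function name fine_grain_hypergraph and the (i, j) coordinate input are assumptions for illustration.

from collections import defaultdict

def fine_grain_hypergraph(nonzeros):
    """nonzeros: iterable of (i, j) coordinates of the sparse matrix A.
    Returns (vertices, nets), where vertices[v] = (i, j) and each net is
    the list of vertex ids it connects."""
    vertices = list(nonzeros)          # one vertex per nonzero a_ij
    row_nets = defaultdict(list)       # row net n_i: nonzeros of row i
    col_nets = defaultdict(list)       # column net n_j: nonzeros of column j
    for v, (i, j) in enumerate(vertices):
        row_nets[i].append(v)
        col_nets[j].append(v)
    nets = list(row_nets.values()) + list(col_nets.values())
    return vertices, nets

# Example: a 3x3 matrix with 5 nonzeros yields 5 vertices and 6 nets
# (3 row nets + 3 column nets).
coords = [(0, 0), (0, 2), (1, 1), (2, 0), (2, 2)]
verts, nets = fine_grain_hypergraph(coords)
print(len(verts), "vertices,", len(nets), "nets")

A K-way partition of these vertices (e.g., via a hypergraph partitioner such as PaToH) balances the nonzero counts per part while minimizing the connectivity of cut nets, which models total communication volume; the proposed message nets additionally encode which part pairs must exchange a message, so the same partitioning pass also reduces message count.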