A novel method for scaling iterative solvers: avoiding latency overhead of parallel sparse-matrix vector multiplies
Selvitopi, R. O.
Ozdal, M. M.
IEEE Transactions on Parallel and Distributed Systems
Institute of Electrical and Electronics Engineers
632 - 645
In parallel linear iterative solvers, sparse matrix-vector multiplication (SpMxV) incurs irregular point-to-point (P2P) communications, whereas inner product computations incur regular collective communications. These P2P communications introduce an additional synchronization point and carry relatively high message latency costs due to small message sizes. In these solvers, each SpMxV is usually followed by an inner product computation that involves the output vector of the SpMxV. Here, we exploit this property to propose a novel parallelization method that avoids the latency costs and synchronization overhead of P2P communications. Our method involves a computational and a communication rearrangement scheme. The computational rearrangement provides an alternative way of forming the input vector of the SpMxV and allows P2P and collective communications to be performed in a single phase. The communication rearrangement realizes this opportunity by embedding the P2P communications into global collective communication operations. The proposed method guarantees an upper bound on the maximum number of messages communicated, regardless of the sparsity pattern of the matrix. The downside, however, is an increase in message volume and a negligible amount of redundant computation. We favor reducing message latency costs at the expense of increased message volume. Nonetheless, we propose two iterative-improvement-based heuristics that alleviate the increase in volume through one-to-one task-to-processor mapping. Our experiments on two supercomputers, Cray XE6 and IBM BlueGene/Q, with up to 2,048 processors show that the proposed parallelization method exhibits superior scalable performance compared to the conventional parallelization method.
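The single-phase idea in the abstract can be illustrated with a small simulation. The sketch below is not the authors' exact scheme; it only demonstrates the underlying trick under a simplifying assumption (the direction vector `p` is fully replicated on every processor, which is where the extra volume and redundant computation come from). Each "processor" holds a partial contribution `w_k` to the SpMxV output `w = A p`, appends its locally computable partial of the inner product `<p, w>` to that vector, and a single collective sum then combines the vector pieces and the scalar in one phase instead of a P2P exchange followed by a separate reduction. All names and toy values here are illustrative assumptions.

```python
# Hedged sketch (not the paper's exact scheme): simulate P "processors"
# that each hold a partial contribution w_k to the SpMxV output w = A p.
# Because <p, w> = <p, sum_k w_k> = sum_k <p, w_k> when p is replicated,
# each processor can append its partial inner product to its partial
# vector, so one collective reduction carries both the vector and the
# scalar -- replacing a P2P phase plus a separate allreduce.

def single_phase_reduce(partials):
    """Element-wise sum of per-processor [w_k..., partial_dot] payloads,
    simulating one collective that combines vector and scalar at once."""
    n = len(partials[0])
    return [sum(p[i] for p in partials) for i in range(n)]

# Toy data: 3 processors, 4-entry output vector (values are illustrative).
w_parts = [[1.0, 0.0, 2.0, 0.0],
           [0.0, 3.0, 0.0, 1.0],
           [1.0, 1.0, 0.0, 0.0]]
p = [1.0, 2.0, 0.5, 1.0]  # replicated direction vector (an assumption)

# Each processor appends its partial of <p, w>, computed from its own w_k.
payloads = [wk + [sum(pi * wi for pi, wi in zip(p, wk))] for wk in w_parts]

combined = single_phase_reduce(payloads)
w, dot = combined[:-1], combined[-1]

# The embedded scalar equals the inner product of the combined vector.
assert abs(dot - sum(pi * wi for pi, wi in zip(p, w))) < 1e-12
```

In an actual MPI implementation this corresponds to extending the buffer of one `MPI_Allreduce` (or reduce-scatter) by a scalar slot, so the inner-product reduction rides along with the vector combination and no separate latency-bound P2P phase is needed.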
Inner product computation
Iterative improvement heuristic
Message latency overhead
Conjugate gradient method
Parallel processing systems
Sparse matrix-vector multiplication
Published Version (Please cite this version): http://dx.doi.org/10.1109/TPDS.2014.2311804
Showing items related by title, author, creator and subject.
Onsori, Salman; Asad, Arghavan A; Raahemifar, K.; Fathy, M. (IEEE, 2016-01) In this article, we present a convex optimization model to design a three-dimensional (3D) stacked hybrid memory system to improve performance in the dark silicon era. Our convex model optimizes numbers and placement of static ...
Acer, Seher; Selvitopi, Oğuz; Aykanat, Cevdet (Springer, 2017-08-09) The scalability of sparse matrix-vector multiplication (SpMV) on distributed memory systems depends on multiple factors that involve different communication cost metrics. The irregular sparsity pattern of the coefficient ...
Aktürk, İsmail; Öztürk, Özcan (ACM, 2014-06) The full potential of chip multiprocessors remains unexploited due to the thread-oblivious memory access schedulers used in off-chip main memory controllers. This is especially pronounced in embedded systems due to ...