Browsing by Subject "General sparse matrix-matrix multiplication"
Item (Open Access): Simultaneous computational and data load balancing in distributed-memory setting (SIAM, 2022)
Çeliktuğ, Mestan Fırat; Karsavuran, M. Ozan; Acer, Seher; Aykanat, Cevdet; Sterck, Hans De

Abstract: Several successful partitioning models and methods have been proposed and used for computational load balancing of irregularly sparse applications in a distributed-memory setting. However, the literature lacks partitioning models and methods that encode both computational and data load balancing. In this article, we try to close this gap by proposing two hypergraph partitioning (HP) models that simultaneously encode computational and data load balancing. Both models use a two-constraint formulation, where the first constraint encodes the computational loads and the second constraint encodes the data loads. In the first model, we introduce explicit data vertices for encoding data load and replicate those data vertices at each recursive bipartitioning (RB) step to encode data replication. In the second model, we introduce a data-weight distribution scheme for encoding data load and update those weights at each RB step. A nice property of both proposed models is that they do not necessitate developing a new partitioner from scratch: both can easily be implemented by invoking any HP tool that supports multi-constraint partitioning as a two-way partitioner at each RB step. The validity of the proposed models is tested on two widely used irregularly sparse applications: parallel mesh simulations and parallel sparse matrix-sparse matrix multiplication. Both proposed models achieve significant improvement over a baseline model.
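
The abstract's key implementation point is that each RB step reduces to a single two-constraint, two-way hypergraph partitioning call. The following minimal Python sketch illustrates that driver structure only, under stated assumptions: the Hypergraph container, its weight fields, and the two_way_partition callable are hypothetical stand-ins for whatever multi-constraint HP tool is actually used, and the sketch omits the model-specific data-vertex replication and data-weight update steps described in the abstract.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Hypergraph:
    vertices: List[int]            # vertex ids
    comp_weights: Dict[int, int]   # constraint 1: computational load per vertex
    data_weights: Dict[int, int]   # constraint 2: data load per vertex
    nets: List[List[int]]          # hyperedges, each a list of vertex ids


def recursive_bipartition(
    hg: Hypergraph,
    k: int,
    two_way_partition: Callable[[Hypergraph], List[int]],
) -> Dict[int, int]:
    """Split hg into k parts; each RB step is one two-constraint, two-way call."""
    if k == 1:
        return {v: 0 for v in hg.vertices}

    # The external HP tool balances both vertex-weight constraints at once
    # and returns a side label (0 or 1) for every vertex of hg.
    side = two_way_partition(hg)

    def sub(keep_side: int) -> Hypergraph:
        ids = [v for i, v in enumerate(hg.vertices) if side[i] == keep_side]
        id_set = set(ids)
        return Hypergraph(
            vertices=ids,
            comp_weights={v: hg.comp_weights[v] for v in ids},
            data_weights={v: hg.data_weights[v] for v in ids},
            # Keep only the pins that fall on this side of the cut.
            nets=[pins for net in hg.nets
                  if (pins := [v for v in net if v in id_set])],
        )

    k_left, k_right = k // 2, k - k // 2
    left = recursive_bipartition(sub(0), k_left, two_way_partition)
    right = recursive_bipartition(sub(1), k_right, two_way_partition)

    # Offset right-side part ids so the final labels are 0 .. k-1.
    return {**left, **{v: p + k_left for v, p in right.items()}}
```

In practice, two_way_partition would wrap a partitioner that supports multi-constraint two-way partitioning (for example PaToH), and the two proposed models would differ inside sub(): the first would replicate its data vertices into both sub-hypergraphs, while the second would redistribute the data weights between them.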