Partitioning models for general medium-grain parallel sparse tensor decomposition

buir.contributor.authorKarsavuran, M. Ozan
buir.contributor.authorAykanat, Cevdet
buir.contributor.orcidKarsavuran, M. Ozan|0000-0002-0298-3034
buir.contributor.orcidAykanat, Cevdet|0000-0002-4559-1321
dc.citation.epage159en_US
dc.citation.issueNumber1en_US
dc.citation.spage147en_US
dc.citation.volumeNumber32en_US
dc.contributor.authorKarsavuran, M. Ozan
dc.contributor.authorAcer, S.
dc.contributor.authorAykanat, Cevdet
dc.date.accessioned2022-01-31T11:08:41Z
dc.date.available2022-01-31T11:08:41Z
dc.date.issued2021
dc.departmentDepartment of Computer Engineeringen_US
dc.description.abstractThe focus of this article is efficient parallelization of the canonical polyadic decomposition algorithm utilizing the alternating least squares method for sparse tensors on distributed-memory architectures. We propose a hypergraph model for general medium-grain partitioning which does not enforce any topological constraint on the partitioning. The proposed model is based on splitting the given tensor into nonzero-disjoint component tensors. Then a mode-dependent coarse-grain hypergraph is constructed for each component tensor. A net amalgamation operation is proposed to form a composite medium-grain hypergraph from these mode-dependent coarse-grain hypergraphs that correctly encapsulates the minimization of the communication volume. We propose a heuristic which splits the nonzeros of dense slices to obtain sparse slices in component tensors. Thus we partially attain slice coherency at the (sub)slice level, since partitioning is performed on (sub)slices instead of individual nonzeros. We also utilize the well-known recursive-bipartitioning framework to improve the quality of the splitting heuristic. Finally, we propose a medium-grain tripartite graph model with the aim of faster partitioning at the expense of increased total communication volume. Parallel experiments conducted on 10 real-world tensors on up to 1024 processors confirm the validity of the proposed hypergraph and graph models.en_US
dc.description.provenanceSubmitted by Evrim Ergin (eergin@bilkent.edu.tr) on 2022-01-31T11:08:41Z No. of bitstreams: 1 Partitioning_models_for_general_medium-grain_parallel_sparse_tensor_decomposition.pdf: 1674695 bytes, checksum: fe653f997c8926fad9a4fd388286fffe (MD5)en
dc.identifier.doi10.1109/TPDS.2020.3012624en_US
dc.identifier.eissn1558-2183
dc.identifier.issn1045-9219
dc.identifier.urihttp://hdl.handle.net/11693/76912
dc.language.isoEnglishen_US
dc.publisherIEEEen_US
dc.relation.isversionofhttps://doi.org/10.1109/TPDS.2020.3012624en_US
dc.source.titleIEEE Transactions on Parallel and Distributed Systemsen_US
dc.subjectSparse tensoren_US
dc.subjectTensor decompositionen_US
dc.subjectCanonical polyadic decompositionen_US
dc.subjectCommunication costen_US
dc.subjectCommunication volumeen_US
dc.subjectMedium-grain partitioningen_US
dc.subjectRecursive bipartitioningen_US
dc.subjectHypergraph partitioningen_US
dc.subjectGraph partitioningen_US
dc.titlePartitioning models for general medium-grain parallel sparse tensor decompositionen_US
dc.typeArticleen_US

Files

Original bundle

Name: Partitioning_models_for_general_medium-grain_parallel_sparse_tensor_decomposition.pdf
Size: 1.6 MB
Format: Adobe Portable Document Format
License bundle

Name: license.txt
Size: 1.69 KB
Format: Item-specific license agreed upon to submission