Scaling stratified stochastic gradient descent for distributed matrix completion

buir.contributor.authorAbubaker, Nabil
buir.contributor.authorAykanat, Cevdet
buir.contributor.orcidAbubaker, Nabil|0000-0002-5060-3059
buir.contributor.orcidAykanat, Cevdet|0000-0002-4559-1321
dc.citation.epage10615en_US
dc.citation.issueNumber10
dc.citation.spage10603
dc.citation.volumeNumber35
dc.contributor.authorAbubaker, Nabil
dc.contributor.authorKarsavuran, M. O.
dc.contributor.authorAykanat, Cevdet
dc.date.accessioned2024-03-18T14:10:02Z
dc.date.available2024-03-18T14:10:02Z
dc.date.issued2023-10-01
dc.departmentDepartment of Computer Engineering
dc.description.abstractStratified SGD (SSGD) is the primary approach for achieving serializable parallel SGD for matrix completion. State-of-the-art parallelizations of SSGD fail to scale due to their large communication overhead: during an SGD epoch, they send data proportional to one of the dimensions of the rating matrix. We propose a framework for scalable SSGD that significantly reduces this overhead by exchanging point-to-point messages that exploit the sparsity of the rating matrix. We provide formulas that capture the communication essential for correctly performing parallel SSGD, and we propose a dynamic programming algorithm that computes them efficiently to establish the point-to-point message schedules. This scheme, however, significantly increases the number of messages sent by a processor per epoch from O(K) to O(K²) for a K-processor system, which might limit scalability. To remedy this, we propose a Hold-and-Combine strategy that limits the upper bound on the number of messages sent per processor to O(K lg K). We also propose a hypergraph partitioning model that correctly encapsulates the reduction of the communication volume. Experimental results show that the framework achieves scalable distributed SSGD by significantly reducing the communication overhead. Our code is publicly available at github.com/nfabubaker/CESSGD
dc.identifier.doi10.1109/TKDE.2023.3253791en_US
dc.identifier.eissn1558-2191en_US
dc.identifier.issn1041-4347en_US
dc.identifier.urihttps://hdl.handle.net/11693/114918en_US
dc.language.isoEnglishen_US
dc.publisherInstitute of Electrical and Electronics Engineersen_US
dc.relation.isversionofhttps://dx.doi.org/10.1109/TKDE.2023.3253791
dc.source.titleIEEE Transactions on Knowledge and Data Engineering
dc.subjectBandwidth cost
dc.subjectCombinatorial algorithms
dc.subjectCommunication cost minimization
dc.subjectCollaborative filtering
dc.subjectHPC
dc.subjectHypergraph partitioning
dc.subjectLatency cost
dc.subjectMatrix completion
dc.subjectRecommender systems
dc.subjectSGD
dc.titleScaling stratified stochastic gradient descent for distributed matrix completion
dc.typeArticle
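
To make the scheme summarized in the abstract concrete, the following is a minimal, single-process Python sketch of stratified SGD for matrix completion. The K x K block grid, diagonal strata, hyperparameters, and names (ssgd_epoch, blocks) are illustrative assumptions, not the authors' CESSGD implementation, which runs distributed and adds the point-to-point communication, Hold-and-Combine, and hypergraph-partitioning contributions of the paper.

import numpy as np

def ssgd_epoch(blocks, W, H, K, lr=0.01, reg=0.05):
    # One epoch = K sub-epochs. In sub-epoch s, "worker" p owns block
    # (p, (p + s) % K); these K blocks share no rows or columns of the
    # rating matrix, so concurrent updates would be serializable.
    # Here the K workers are simulated sequentially.
    for s in range(K):
        for p in range(K):                # concurrent across p in a real run
            q = (p + s) % K               # stratum: one block per row/column
            for i, j, r in blocks[p][q]:  # nonzeros (row, col, rating)
                err = r - W[i] @ H[j]
                wi = W[i].copy()
                W[i] += lr * (err * H[j] - reg * W[i])
                H[j] += lr * (err * wi - reg * H[j])

# Toy usage: factorize a small random sparse rating matrix.
rng = np.random.default_rng(0)
m, n, f, K = 40, 40, 8, 4
ratings = [(i, j, rng.uniform(1, 5))
           for i in range(m) for j in range(n) if rng.random() < 0.1]
blocks = [[[(i, j, r) for (i, j, r) in ratings
            if i * K // m == p and j * K // n == q]
           for q in range(K)] for p in range(K)]
W = rng.standard_normal((m, f)) * 0.1
H = rng.standard_normal((n, f)) * 0.1
for epoch in range(20):
    ssgd_epoch(blocks, W, H, K)
rmse = np.sqrt(np.mean([(r - W[i] @ H[j]) ** 2 for (i, j, r) in ratings]))
print(f"training RMSE after 20 epochs: {rmse:.3f}")

In a distributed run, each worker would own a block row of W and, after each sub-epoch, exchange the H rows it updated; the paper's contribution is restricting that exchange to the rows actually needed, given the sparsity pattern, while keeping the message count per processor bounded.
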

Files

Original bundle

Name: Scaling_stratified_stochastic_gradient_descent_for_distributed_matrix_completion.pdf
Size: 1.58 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.01 KB
Format: Item-specific license agreed upon to submission