Stochastic Gradient Descent for matrix completion: hybrid parallelization on shared- and distributed-memory systems

buir.contributor.author: Büyükkaya, Kemal
buir.contributor.author: Karsavuran, M. Ozan
buir.contributor.author: Aykanat, Cevdet
buir.contributor.orcid: Büyükkaya, Kemal|0000-0002-8135-9917
buir.contributor.orcid: Karsavuran, M. Ozan|0000-0002-0298-3034
buir.contributor.orcid: Aykanat, Cevdet|0000-0002-4559-1321
dc.citation.epage: 111176-12
dc.citation.spage: 111176-1
dc.citation.volumeNumber: 283
dc.contributor.author: Büyükkaya, Kemal
dc.contributor.author: Karsavuran, M. Ozan
dc.contributor.author: Aykanat, Cevdet
dc.date.accessioned: 2024-03-08T18:32:41Z
dc.date.available: 2024-03-08T18:32:41Z
dc.date.issued: 2024-01-11
dc.department: Department of Computer Engineering
dc.description.abstract: The purpose of this study is to investigate the hybrid parallelization of the Stochastic Gradient Descent (SGD) algorithm for solving the matrix completion problem on a high-performance computing platform. We propose a hybrid parallel decentralized SGD framework with asynchronous inter-process communication and a novel flexible partitioning scheme to attain scalability up to hundreds of processors. We utilize the Message Passing Interface (MPI) for inter-node communication and POSIX threads for intra-node parallelism. We tested our method on several real-world benchmark datasets. Experimental results on a hybrid parallel architecture showed that, compared to the state of the art, the proposed algorithm achieves 6x higher throughput on sparse datasets, while achieving comparable throughput on relatively dense datasets.
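For readers unfamiliar with the underlying computation, the kernel the paper parallelizes can be illustrated by a minimal, single-threaded SGD sketch for matrix completion: observed entries of a sparse ratings matrix are visited in random order, and two low-rank factor matrices are updated per entry. This is an assumption-laden illustration of the generic algorithm only, not the authors' hybrid MPI/pthreads implementation; all names and hyperparameters below are illustrative.

```python
import random
import numpy as np

def sgd_matrix_completion(ratings, n_users, n_items, k=8,
                          lr=0.05, reg=0.02, epochs=100, seed=0):
    """Plain sequential SGD for matrix completion (illustrative sketch).

    ratings: list of (user, item, value) triples for the observed entries.
    Returns factors P (n_users x k) and Q (n_items x k) such that
    P @ Q.T approximates the observed entries.
    """
    rng = np.random.default_rng(seed)
    shuffler = random.Random(seed)
    # Small random initialization of the two factor matrices.
    P = 0.1 * rng.standard_normal((n_users, k))
    Q = 0.1 * rng.standard_normal((n_items, k))
    for _ in range(epochs):
        shuffler.shuffle(ratings)          # visit observed entries in random order
        for u, i, r in ratings:
            e = r - P[u] @ Q[i]            # prediction error on this entry
            pu = P[u].copy()               # keep the old row for Q's update
            P[u] += lr * (e * Q[i] - reg * P[u])
            Q[i] += lr * (e * pu - reg * Q[i])
    return P, Q
```

Because each update touches only one row of P and one row of Q, updates on disjoint row/column blocks can proceed concurrently, which is what makes the block-partitioned parallel schemes discussed in the paper possible.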
dc.description.provenance: Made available in DSpace on 2024-03-08T18:32:41Z (GMT). No. of bitstreams: 1 Stochastic_Gradient_Descent_for_matrix_completion_Hybrid_parallelization_on_shared-_and_distributed-memory_systems.pdf: 1891906 bytes, checksum: 85d359f319f219cb6244196f56da4f71 (MD5) Previous issue date: 2024-01
dc.identifier.doi: 10.1016/j.knosys.2023.111176
dc.identifier.eissn: 1872-7409
dc.identifier.issn: 0950-7051
dc.identifier.uri: https://hdl.handle.net/11693/114431
dc.language.iso: en
dc.publisher: ELSEVIER BV
dc.relation.isversionof: https://doi.org/10.1016/j.knosys.2023.111176
dc.rights: CC BY 4.0 DEED (Attribution 4.0 International)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.source.title: Knowledge-Based Systems
dc.subject: Stochastic gradient descent
dc.subject: Matrix completion
dc.subject: Collaborative filtering
dc.subject: Matrix factorization
dc.subject: Distributed-memory systems
dc.subject: Shared-memory systems
dc.subject: Hybrid parallelism
dc.title: Stochastic Gradient Descent for matrix completion: hybrid parallelization on shared- and distributed-memory systems
dc.type: Article

Files

Original bundle

Name: Stochastic_Gradient_Descent_for_matrix_completion_Hybrid_parallelization_on_shared-_and_distributed-memory_systems.pdf
Size: 1.8 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.01 KB
Format: Item-specific license agreed upon to submission