Hybrid parallelization of Stochastic Gradient Descent

buir.advisor: Aykanat, Cevdet
dc.contributor.author: Büyükkaya, Kemal
dc.date.accessioned: 2022-02-22T05:19:04Z
dc.date.available: 2022-02-22T05:19:04Z
dc.date.copyright: 2022-02
dc.date.issued: 2022-02
dc.date.submitted: 2022-02-03
dc.description: Cataloged from PDF version of article.
dc.description: Thesis (Master's): Department of Computer Engineering, İhsan Doğramacı Bilkent University, 2022.
dc.description: Includes bibliographical references (leaves 33-35).
dc.description.abstract: The purpose of this study is to investigate the efficient parallelization of the Stochastic Gradient Descent (SGD) algorithm for solving the matrix completion problem on a high-performance computing (HPC) platform in a distributed-memory setting. We propose a hybrid parallel decentralized SGD framework with asynchronous communication between processors to show the scalability of parallel SGD up to hundreds of processors. We utilize the Message Passing Interface (MPI) for inter-node communication and POSIX threads for intra-node parallelism. We tested our method on four different real-world benchmark datasets. Experimental results show that the proposed algorithm yields up to 6× better throughput on relatively sparse datasets, and displays performance comparable to available state-of-the-art algorithms on relatively dense datasets, while providing a flexible partitioning scheme and a highly scalable hybrid parallel architecture.
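The abstract describes SGD applied to matrix completion, i.e. factorizing a partially observed ratings matrix R into low-rank factors P and Q so that R ≈ P·Qᵀ. As a point of reference for the algorithm being parallelized, the following is a minimal serial sketch of that update rule; it is a generic illustration, not the thesis's hybrid MPI/pthreads implementation, and all names and hyperparameters are hypothetical.

```python
import random

def sgd_matrix_completion(ratings, n_rows, n_cols, rank=4,
                          lr=0.02, reg=0.05, epochs=500, seed=0):
    """Serial SGD sketch for matrix completion: R ~= P @ Q^T.

    ratings: list of observed entries (i, j, value).
    Returns the factor matrices P (n_rows x rank) and Q (n_cols x rank).
    """
    rng = random.Random(seed)
    P = [[rng.uniform(-0.1, 0.1) for _ in range(rank)] for _ in range(n_rows)]
    Q = [[rng.uniform(-0.1, 0.1) for _ in range(rank)] for _ in range(n_cols)]
    for _ in range(epochs):
        rng.shuffle(ratings)  # visit observed entries in random order
        for i, j, r in ratings:
            # Prediction for entry (i, j) and its error.
            pred = sum(P[i][k] * Q[j][k] for k in range(rank))
            err = r - pred
            # Regularized gradient step on both factor rows.
            for k in range(rank):
                p, q = P[i][k], Q[j][k]
                P[i][k] += lr * (err * q - reg * p)
                Q[j][k] += lr * (err * p - reg * q)
    return P, Q

def rmse(ratings, P, Q, rank=4):
    """Root-mean-square error over the given observed entries."""
    se = sum((r - sum(P[i][k] * Q[j][k] for k in range(rank))) ** 2
             for i, j, r in ratings)
    return (se / len(ratings)) ** 0.5
```

The parallelization challenge the thesis addresses arises because concurrent workers update shared factor rows of P and Q; the inner loop above is the unit of work that gets distributed across MPI ranks and POSIX threads.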
dc.description.provenance: Submitted by Betül Özen (ozen@bilkent.edu.tr) on 2022-02-22T05:19:04Z. No. of bitstreams: 1. B160770.pdf: 775719 bytes, checksum: 9b4264bc750cde39024eb7231a43ef86 (MD5)
dc.description.statementofresponsibility: by Kemal Büyükkaya
dc.embargo.release: 2022-08-03
dc.format.extent: xii, 35 leaves : illustrations ; 30 cm.
dc.identifier.itemid: B160770
dc.identifier.uri: http://hdl.handle.net/11693/77541
dc.language.iso: English
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: Stochastic gradient descent
dc.subject: Matrix completion
dc.subject: Matrix factorization
dc.subject: Parallel distributed memory system
dc.title: Hybrid parallelization of Stochastic Gradient Descent
dc.title.alternative: Olasılıksal Gradyan Alçalmanın hibrit paralelleştirilmesi [Turkish: Hybrid parallelization of Stochastic Gradient Descent]
dc.type: Thesis
thesis.degree.discipline: Computer Engineering
thesis.degree.grantor: Bilkent University
thesis.degree.level: Master's
thesis.degree.name: MS (Master of Science)

Files

Original bundle

Name: B160770.pdf
Size: 757.54 KB
Format: Adobe Portable Document Format
Description: Full printable version

License bundle

Name: license.txt
Size: 1.69 KB
Description: Item-specific license agreed upon to submission