Hybrid parallelization of Stochastic Gradient Descent
buir.advisor | Aykanat, Cevdet | |
dc.contributor.author | Büyükkaya, Kemal | |
dc.date.accessioned | 2022-02-22T05:19:04Z | |
dc.date.available | 2022-02-22T05:19:04Z | |
dc.date.copyright | 2022-02 | |
dc.date.issued | 2022-02 | |
dc.date.submitted | 2022-02-03 | |
dc.description | Cataloged from PDF version of article. | en_US |
dc.description | Thesis (Master's): İhsan Doğramacı Bilkent University, Department of Computer Engineering, 2022. | en_US |
dc.description | Includes bibliographical references (leaves 33-35). | en_US |
dc.description.abstract | The purpose of this study is to investigate the efficient parallelization of the Stochastic Gradient Descent (SGD) algorithm for solving the matrix completion problem on a high-performance computing (HPC) platform in a distributed-memory setting. We propose a hybrid parallel decentralized SGD framework with asynchronous communication between processors to show the scalability of parallel SGD up to hundreds of processors. We utilize the Message Passing Interface (MPI) for inter-node communication and POSIX threads for intra-node parallelism. We tested our method on four different real-world benchmark datasets. Experimental results show that the proposed algorithm yields up to 6× better throughput on relatively sparse datasets, and displays comparable performance to available state-of-the-art algorithms on relatively dense datasets, while providing a flexible partitioning scheme and a highly scalable hybrid parallel architecture. | en_US |
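The abstract describes factoring a sparse rating matrix R into low-rank factors P and Q by applying SGD updates to the observed entries; in the hybrid scheme those updates are distributed across MPI processes and POSIX threads. As a rough orientation only, the serial C++ kernel below sketches the per-entry update that such a framework parallelizes. The latent dimension F, learning rate gamma, regularization lambda, and the toy data are illustrative assumptions, not the thesis implementation.

// Minimal serial SGD sketch for matrix completion (R ~ P * Q^T).
// All constants and data below are assumptions for illustration.
#include <cstdio>
#include <cstdlib>
#include <vector>

struct Rating { int row, col; double value; };  // one observed entry of R

int main() {
    const int F = 16;            // latent dimension (assumed)
    const double gamma = 0.01;   // learning rate (assumed)
    const double lambda = 0.05;  // regularization factor (assumed)
    const int numRows = 4, numCols = 4, epochs = 100;

    // Toy observed entries of a sparse rating matrix (assumed data).
    std::vector<Rating> ratings = {
        {0, 0, 5.0}, {0, 2, 3.0}, {1, 1, 4.0},
        {2, 0, 1.0}, {2, 3, 2.0}, {3, 2, 4.5}
    };

    // Randomly initialized factor matrices P (row factors) and Q (column factors).
    std::vector<std::vector<double>> P(numRows, std::vector<double>(F)),
                                     Q(numCols, std::vector<double>(F));
    for (auto& p : P) for (auto& x : p) x = 0.1 * (std::rand() / (double)RAND_MAX);
    for (auto& q : Q) for (auto& x : q) x = 0.1 * (std::rand() / (double)RAND_MAX);

    // SGD sweeps: each observed entry updates the two factor rows it touches.
    for (int epoch = 0; epoch < epochs; ++epoch) {
        for (const auto& r : ratings) {
            double pred = 0.0;
            for (int f = 0; f < F; ++f) pred += P[r.row][f] * Q[r.col][f];
            const double err = r.value - pred;
            for (int f = 0; f < F; ++f) {
                const double pOld = P[r.row][f];
                P[r.row][f] += gamma * (err * Q[r.col][f] - lambda * pOld);
                Q[r.col][f] += gamma * (err * pOld - lambda * Q[r.col][f]);
            }
        }
    }

    // Report squared error over the observed entries after training.
    double sse = 0.0;
    for (const auto& r : ratings) {
        double pred = 0.0;
        for (int f = 0; f < F; ++f) pred += P[r.row][f] * Q[r.col][f];
        sse += (r.value - pred) * (r.value - pred);
    }
    std::printf("SSE over observed entries: %f\n", sse);
    return 0;
}

In the hybrid setting sketched in the abstract, each MPI process would own a partition of the rows of P and the corresponding observed entries, threads within a node would apply these updates concurrently, and the column factors Q would be exchanged asynchronously between nodes; the loop body itself stays essentially the same.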
dc.description.provenance | Submitted by Betül Özen (ozen@bilkent.edu.tr) on 2022-02-22T05:19:04Z No. of bitstreams: 1 B160770.pdf: 775719 bytes, checksum: 9b4264bc750cde39024eb7231a43ef86 (MD5) | en |
dc.description.provenance | Made available in DSpace on 2022-02-22T05:19:04Z (GMT). No. of bitstreams: 1 B160770.pdf: 775719 bytes, checksum: 9b4264bc750cde39024eb7231a43ef86 (MD5) Previous issue date: 2022-02 | en |
dc.description.statementofresponsibility | by Kemal Büyükkaya | en_US |
dc.embargo.release | 2022-08-03 | |
dc.format.extent | xii, 35 leaves : illustrations ; 30 cm. | en_US |
dc.identifier.itemid | B160770 | |
dc.identifier.uri | http://hdl.handle.net/11693/77541 | |
dc.language.iso | English | en_US |
dc.rights | info:eu-repo/semantics/openAccess | en_US |
dc.subject | Stochastic gradient descent | en_US |
dc.subject | Matrix completion | en_US |
dc.subject | Matrix factorization | en_US |
dc.subject | Parallel distributed memory system | en_US |
dc.title | Hybrid parallelization of Stochastic Gradient Descent | en_US |
dc.title.alternative | Olasılıksal Gradyan Alçalmanın hibrit paralelleştirilmesi | en_US |
dc.type | Thesis | en_US |
thesis.degree.discipline | Computer Engineering | |
thesis.degree.grantor | Bilkent University | |
thesis.degree.level | Master's | |
thesis.degree.name | MS (Master of Science) |