Author: Büyükkaya, Kemal
Dates: 2022-02-22; 2022-02; 2022-02-03
Handle: http://hdl.handle.net/11693/77541
Note: Cataloged from PDF version of article.
Thesis (Master's): Bilkent University, Department of Computer Engineering, İhsan Doğramacı Bilkent University, 2022.
Includes bibliographical references (leaves 33-35).

Abstract: The purpose of this study is to investigate the efficient parallelization of the Stochastic Gradient Descent (SGD) algorithm for solving the matrix completion problem on a high-performance computing (HPC) platform in a distributed-memory setting. We propose a hybrid parallel decentralized SGD framework with asynchronous communication between processors to show the scalability of parallel SGD up to hundreds of processors. We utilize the Message Passing Interface (MPI) for inter-node communication and POSIX threads for intra-node parallelism. We tested our method using four different real-world benchmark datasets. Experimental results show that the proposed algorithm yields up to 6× better throughput on relatively sparse datasets, and displays comparable performance to available state-of-the-art algorithms on relatively dense datasets, while providing a flexible partitioning scheme and a highly scalable hybrid parallel architecture.

Physical description: xii, 35 leaves : illustrations ; 30 cm.
Language: English
Access: info:eu-repo/semantics/openAccess
Keywords: Stochastic gradient descent; Matrix completion; Matrix factorization; Parallel distributed memory system
Title: Hybrid parallelization of Stochastic Gradient Descent
Alternative title (Turkish): Olasılıksal Gradyan Alçalmanın hibrit paralelleştirilmesi [Hybrid parallelization of Stochastic Gradient Descent]
Type: Thesis
Item ID: B160770
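To make the abstract concrete: the core computation the thesis parallelizes is the per-entry SGD update for matrix factorization, where each observed rating nudges one row of the user-factor matrix and one row of the item-factor matrix. The sketch below is a minimal serial version of that update loop, not the thesis's hybrid MPI/pthreads implementation; the function name, hyper-parameters, and update rule are illustrative assumptions only.

```python
import random
import numpy as np

def sgd_matrix_completion(ratings, num_users, num_items, rank=8,
                          lr=0.01, reg=0.05, epochs=50, seed=0):
    """Serial SGD for matrix completion (illustrative sketch only).

    ratings: list of (user, item, value) triples of observed entries.
    Returns factors W (num_users x rank) and H (num_items x rank)
    such that W @ H.T approximates the observed entries.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(num_users, rank))
    H = rng.normal(scale=0.1, size=(num_items, rank))
    random.seed(seed)
    for _ in range(epochs):
        random.shuffle(ratings)  # visit observed entries in random order
        for u, i, r in ratings:
            err = r - W[u] @ H[i]
            # Regularized gradient step on both factor rows; the tuple
            # assignment uses the *old* W[u] when updating H[i].
            W[u], H[i] = (W[u] + lr * (err * H[i] - reg * W[u]),
                          H[i] + lr * (err * W[u] - reg * H[i]))
    return W, H
```

In a distributed-memory setting, these per-entry updates are what get partitioned across processes: updates to disjoint blocks of W and H rows are independent, which is what makes asynchronous, decentralized parallelization of SGD feasible.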