Stochastic Gradient Descent for matrix completion: hybrid parallelization on shared- and distributed-memory systems

Date

2024-01-11

Authors

Büyükkaya, Kemal
Karsavuran, M. Ozan
Aykanat, Cevdet

Source Title

Knowledge-Based Systems

Print ISSN

0950-7051

Electronic ISSN

1872-7409

Publisher

Elsevier BV

Volume

283

Pages

111176-1 - 111176-12

Language

en

Abstract

The purpose of this study is to investigate the hybrid parallelization of the Stochastic Gradient Descent (SGD) algorithm for solving the matrix completion problem on a high-performance computing platform. We propose a hybrid parallel decentralized SGD framework with asynchronous inter-process communication and a novel flexible partitioning scheme to attain scalability up to hundreds of processors. We use the Message Passing Interface (MPI) for inter-node communication and POSIX threads for intra-node parallelism. We evaluated our method on several real-world benchmark datasets. Experimental results on a hybrid parallel architecture show that, compared to the state of the art, the proposed algorithm achieves 6x higher throughput on sparse datasets and comparable throughput on relatively dense datasets.
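
The computation being parallelized is the per-rating SGD update, which adjusts one row of the user-factor matrix and one row of the item-factor matrix for each known entry. The following is a minimal serial sketch of that update in C++; the rank, learning rate, regularization constant, and toy ratings are illustrative assumptions rather than the paper's settings, and the sketch deliberately omits the MPI/POSIX-threads parallelization and partitioning scheme described in the abstract.

// Minimal serial sketch of the core SGD update for matrix completion.
// Hyperparameters and data below are illustrative placeholders, not the
// settings or datasets used in the paper.
#include <cstdio>
#include <vector>
#include <cmath>

struct Rating { int u, i; double r; };   // one known entry of the rating matrix

int main() {
    const int num_users = 4, num_items = 4, rank = 2;
    const double lr = 0.01, reg = 0.05;
    std::vector<Rating> ratings = {{0,0,5.0},{0,2,3.0},{1,1,4.0},{2,3,2.0},{3,0,1.0}};

    // Factor matrices W (num_users x rank) and H (num_items x rank), stored row-major.
    std::vector<double> W(num_users * rank, 0.1), H(num_items * rank, 0.1);

    for (int epoch = 0; epoch < 50; ++epoch) {
        for (const Rating &t : ratings) {
            double *w = &W[t.u * rank], *h = &H[t.i * rank];
            double pred = 0.0;
            for (int k = 0; k < rank; ++k) pred += w[k] * h[k];
            double err = t.r - pred;
            // Gradient step on the two factor rows touched by this rating.
            for (int k = 0; k < rank; ++k) {
                double wk = w[k];
                w[k] += lr * (err * h[k] - reg * wk);
                h[k] += lr * (err * wk   - reg * h[k]);
            }
        }
    }

    // Report root-mean-square error over the known ratings.
    double se = 0.0;
    for (const Rating &t : ratings) {
        double pred = 0.0;
        for (int k = 0; k < rank; ++k) pred += W[t.u * rank + k] * H[t.i * rank + k];
        se += (t.r - pred) * (t.r - pred);
    }
    std::printf("RMSE = %f\n", std::sqrt(se / ratings.size()));
    return 0;
}

In the hybrid scheme described in the abstract, updates of this form are distributed so that different processes and threads work on disjoint blocks of ratings concurrently, with factor rows exchanged asynchronously between MPI processes.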
