Stochastic Gradient Descent for matrix completion: hybrid parallelization on shared- and distributed-memory systems

Date

2024-01-11

Source Title

Knowledge-Based Systems

Print ISSN

0950-7051

Electronic ISSN

1872-7409

Publisher

Elsevier BV

Volume

283

Pages

111176-1 - 111176-12

Language

en

Abstract

This study investigates the hybrid parallelization of the Stochastic Gradient Descent (SGD) algorithm for solving the matrix completion problem on a high-performance computing platform. We propose a hybrid parallel decentralized SGD framework with asynchronous inter-process communication and a novel flexible partitioning scheme to attain scalability up to hundreds of processors. We use the Message Passing Interface (MPI) for inter-node communication and POSIX threads for intra-node parallelism. We evaluated our method on several real-world benchmark datasets. Experimental results on a hybrid parallel architecture show that, compared to the state-of-the-art, the proposed algorithm achieves 6x higher throughput on sparse datasets and comparable throughput on relatively dense datasets.
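For readers unfamiliar with the underlying kernel the abstract parallelizes, the following is a minimal serial sketch of SGD for matrix completion: a partially observed rating matrix is factored into low-rank factors U and V by iterating over the observed entries. This is an illustration only, not the paper's parallel implementation; all names (`sgd_complete`, `rank`, `lr`, `reg`) and hyperparameter values are illustrative assumptions.

```python
# Serial SGD for matrix completion: fit R ~ U @ V^T on observed entries only.
# Illustrative sketch; not the paper's hybrid MPI/pthreads implementation.
import random

def sgd_complete(entries, m, n, rank=4, lr=0.01, reg=0.05, epochs=200, seed=0):
    """entries: list of (i, j, r) observed ratings of an m x n matrix.
    Returns factor matrices U (m x rank) and V (n x rank)."""
    rng = random.Random(seed)
    U = [[rng.uniform(-0.1, 0.1) for _ in range(rank)] for _ in range(m)]
    V = [[rng.uniform(-0.1, 0.1) for _ in range(rank)] for _ in range(n)]
    for _ in range(epochs):
        rng.shuffle(entries)  # stochastic: visit observed entries in random order
        for i, j, r in entries:
            pred = sum(U[i][f] * V[j][f] for f in range(rank))
            e = r - pred      # residual on this observed entry
            for f in range(rank):  # gradient step on both factor rows
                u, v = U[i][f], V[j][f]
                U[i][f] += lr * (e * v - reg * u)
                V[j][f] += lr * (e * u - reg * v)
    return U, V
```

In the parallel setting the abstract describes, the key property exploited is that an update to entry (i, j) touches only row i of U and row j of V, so entries whose rows are disjoint can be processed concurrently; the partitioning scheme decides how these rows are distributed across MPI processes and threads.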
