Stochastic Gradient Descent for matrix completion: hybrid parallelization on shared- and distributed-memory systems
buir.contributor.author | Büyükkaya, Kemal | |
buir.contributor.author | Karsavuran, M. Ozan | |
buir.contributor.author | Aykanat, Cevdet | |
buir.contributor.orcid | Büyükkaya, Kemal|0000-0002-8135-9917 | |
buir.contributor.orcid | Karsavuran, M. Ozan|0000-0002-0298-3034 | |
buir.contributor.orcid | Aykanat, Cevdet|0000-0002-4559-1321 | |
dc.citation.epage | 111176-12 | en_US |
dc.citation.spage | 111176-1 | |
dc.citation.volumeNumber | 283 | |
dc.contributor.author | Büyükkaya, Kemal | |
dc.contributor.author | Karsavuran, M. Ozan | |
dc.contributor.author | Aykanat, Cevdet | |
dc.date.accessioned | 2024-03-08T18:32:41Z | |
dc.date.available | 2024-03-08T18:32:41Z | |
dc.date.issued | 2024-01-11 | |
dc.department | Department of Computer Engineering | |
dc.description.abstract | This study investigates the hybrid parallelization of the Stochastic Gradient Descent (SGD) algorithm for solving the matrix completion problem on a high-performance computing platform. We propose a hybrid parallel decentralized SGD framework with asynchronous inter-process communication and a novel flexible partitioning scheme to attain scalability up to hundreds of processors. We utilize the Message Passing Interface (MPI) for inter-node communication and POSIX threads for intra-node parallelism. We evaluated our method on several real-world benchmark datasets. Experimental results on a hybrid parallel architecture show that, compared to the state of the art, the proposed algorithm achieves 6x higher throughput on sparse datasets and comparable throughput on relatively dense datasets. | |
dc.identifier.doi | 10.1016/j.knosys.2023.111176 | |
dc.identifier.eissn | 1872-7409 | |
dc.identifier.issn | 0950-7051 | |
dc.identifier.uri | https://hdl.handle.net/11693/114431 | |
dc.language.iso | en | |
dc.publisher | Elsevier BV | |
dc.relation.isversionof | https://doi.org/10.1016/j.knosys.2023.111176 | |
dc.rights | CC BY 4.0 (Attribution 4.0 International) | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.source.title | Knowledge-Based Systems | |
dc.subject | Stochastic gradient descent | |
dc.subject | Matrix completion | |
dc.subject | Collaborative filtering | |
dc.subject | Matrix factorization | |
dc.subject | Distributed-memory systems | |
dc.subject | Shared-memory systems | |
dc.subject | Hybrid parallelism | |
dc.title | Stochastic Gradient Descent for matrix completion: hybrid parallelization on shared- and distributed-memory systems | |
dc.type | Article |
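As a concrete illustration of the hybrid pattern described in the abstract (MPI across nodes, POSIX threads within a node, asynchronous SGD updates), here is a minimal sketch in C. It is not the authors' implementation: the latent dimension, learning rate, regularization, synthetic ratings, and the blocking Allreduce used to exchange item factors are simplifying assumptions made for brevity; the paper's framework instead uses decentralized, asynchronous point-to-point communication under a flexible partitioning scheme.

```c
/* Minimal sketch (assumed, not the authors' code): hybrid MPI + pthreads SGD
 * for matrix factorization R ~ P * Q^T. Each MPI process holds synthetic
 * local ratings; worker threads apply lock-free SGD updates to P and Q. */
#include <mpi.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define F         16      /* latent dimension (assumed) */
#define GAMMA     0.01f   /* learning rate (assumed)    */
#define LAMBDA    0.05f   /* regularization (assumed)   */
#define N_USERS   1000
#define N_ITEMS   1000
#define N_RATINGS 10000
#define N_THREADS 4

typedef struct { int u, i; float r; } rating_t;

static float    P[N_USERS][F], Q[N_ITEMS][F];
static rating_t ratings[N_RATINGS];

/* One SGD sweep over a contiguous slice of the local ratings;
 * threads update shared factor rows without locks (Hogwild-style). */
static void *sgd_worker(void *arg)
{
    long t  = (long)arg;
    int  lo = (int)( t      * N_RATINGS / N_THREADS);
    int  hi = (int)((t + 1) * N_RATINGS / N_THREADS);
    for (int k = lo; k < hi; k++) {
        rating_t *e = &ratings[k];
        float *p = P[e->u], *q = Q[e->i], err = e->r;
        for (int f = 0; f < F; f++) err -= p[f] * q[f];
        for (int f = 0; f < F; f++) {   /* standard SGD factor update */
            float pf = p[f];
            p[f] += GAMMA * (err * q[f] - LAMBDA * pf);
            q[f] += GAMMA * (err * pf   - LAMBDA * q[f]);
        }
    }
    return NULL;
}

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    srand((unsigned)rank + 1u);          /* per-process synthetic data */
    for (int u = 0; u < N_USERS; u++)
        for (int f = 0; f < F; f++) P[u][f] = 0.1f * rand() / RAND_MAX;
    for (int i = 0; i < N_ITEMS; i++)
        for (int f = 0; f < F; f++) Q[i][f] = 0.1f * rand() / RAND_MAX;
    for (int k = 0; k < N_RATINGS; k++)
        ratings[k] = (rating_t){ rand() % N_USERS, rand() % N_ITEMS,
                                 (float)(rand() % 5 + 1) };

    for (int epoch = 0; epoch < 10; epoch++) {
        pthread_t tid[N_THREADS];
        for (long t = 0; t < N_THREADS; t++)
            pthread_create(&tid[t], NULL, sgd_worker, (void *)t);
        for (int t = 0; t < N_THREADS; t++)
            pthread_join(tid[t], NULL);
        /* Stand-in for the paper's asynchronous exchange: average the
         * item factors across processes after each epoch. */
        MPI_Allreduce(MPI_IN_PLACE, Q, N_ITEMS * F, MPI_FLOAT,
                      MPI_SUM, MPI_COMM_WORLD);
        for (int i = 0; i < N_ITEMS; i++)
            for (int f = 0; f < F; f++) Q[i][f] /= (float)nprocs;
    }
    if (rank == 0) printf("training finished\n");
    MPI_Finalize();
    return 0;
}
```

With hypothetical file names, this could be built and run as `mpicc -O2 -pthread hybrid_sgd.c -o hybrid_sgd` followed by `mpirun -np 4 ./hybrid_sgd`. Note that the worker threads deliberately update shared factor rows without locks, in the spirit of lock-free asynchronous SGD, where occasional lost updates are tolerated.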
Files
Original bundle
- Name: Stochastic_Gradient_Descent_for_matrix_completion_Hybrid_parallelization_on_shared-_and_distributed-memory_systems.pdf
- Size: 1.8 MB
- Format: Adobe Portable Document Format
License bundle
- Name: license.txt
- Size: 2.01 KB
- Format: Item-specific license agreed to upon submission