Matrix factorization with stochastic gradient descent for recommender systems
buir.advisor | Aykanat, Cevdet | |
dc.contributor.author | Aktulum, Ömer Faruk | |
dc.date.accessioned | 2019-02-26T12:56:48Z | |
dc.date.available | 2019-02-26T12:56:48Z | |
dc.date.copyright | 2019-02 | |
dc.date.issued | 2019-02 | |
dc.date.submitted | 2019-02-25 | |
dc.description | Cataloged from PDF version of article. | en_US |
dc.description | Thesis (M.S.): Department of Computer Engineering, İhsan Doğramacı Bilkent University, 2019. | en_US |
dc.description | Includes bibliographical references (leaves 69-72). | en_US |
dc.description.abstract | Matrix factorization is an efficient technique for uncovering latent features of real-world data. It finds application in areas such as text mining, image analysis, social network analysis and, more recently and prominently, recommender systems. Alternating Least Squares (ALS), Stochastic Gradient Descent (SGD) and Coordinate Descent (CD) are among the methods commonly used for factorizing large matrices. SGD-based factorization proved the most successful of these methods after the Netflix and KDD Cup competitions, where the winners’ algorithms relied on SGD-based methods. Parallelization of SGD then became a hot topic and has been studied extensively in the literature in recent years. We focus on parallel SGD algorithms developed for shared-memory and distributed-memory systems. Shared-memory parallelizations include works such as HogWild, FPSGD and MLGF-MF, and distributed-memory parallelizations include works such as DSGD, GASGD and NOMAD. We present a survey containing an exhaustive analysis of these studies, then focus in particular on DSGD, implementing it through the message-passing paradigm and testing its performance in terms of convergence and speedup. In contrast to existing works, the experiments use many real-world datasets that we produce from published raw data. We show that DSGD is a robust algorithm for large-scale datasets, achieving near-linear speedup with fast convergence rates. | en_US |
dc.description.statementofresponsibility | by Ömer Faruk Aktulum | en_US |
dc.format.extent | xiii, 72 leaves : charts ; 30 cm. | en_US |
dc.identifier.itemid | B159740 | |
dc.identifier.uri | http://hdl.handle.net/11693/50632 | |
dc.language.iso | English | en_US |
dc.rights | info:eu-repo/semantics/openAccess | en_US |
dc.subject | Recommender system | en_US |
dc.subject | Matrix factorization | en_US |
dc.subject | Stochastic gradient descent | en_US |
dc.subject | Parallel computing | en_US |
dc.subject | Shared memory algorithms | en_US |
dc.subject | Distributed memory algorithms | en_US |
dc.title | Matrix factorization with stochastic gradient descent for recommender systems | en_US |
dc.title.alternative | Öneri sistemleri için olasılıksal eğim iniş ile matris çarpanlarına ayırma | en_US |
dc.type | Thesis | en_US |
thesis.degree.discipline | Computer Engineering | |
thesis.degree.grantor | Bilkent University | |
thesis.degree.level | Master's | |
thesis.degree.name | MS (Master of Science) | |
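
The abstract above describes factorizing a sparse rating matrix with SGD, updating user and item factor vectors one observed rating at a time. The sketch below illustrates that serial update rule in plain Python/NumPy; the function name sgd_mf, the hyperparameter values and the toy data are illustrative assumptions, not the thesis's actual implementation (which parallelizes DSGD via message passing).

import random

import numpy as np

def sgd_mf(ratings, num_users, num_items, k=16, lr=0.05, reg=0.05, epochs=500, seed=0):
    # Factorize a sparse rating list [(user, item, rating), ...] into
    # P (num_users x k) and Q (num_items x k) so that rating ~ P[u] @ Q[i].
    random.seed(seed)
    rng = np.random.default_rng(seed)
    P = rng.normal(scale=0.1, size=(num_users, k))
    Q = rng.normal(scale=0.1, size=(num_items, k))
    for _ in range(epochs):
        random.shuffle(ratings)          # visit observed ratings in random order
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]        # prediction error on this rating
            pu = P[u].copy()             # keep old user factors for Q's update
            P[u] += lr * (err * Q[i] - reg * pu)
            Q[i] += lr * (err * pu - reg * Q[i])
    return P, Q

# Toy usage: three users, three items, five observed ratings.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 1, 4.0), (2, 0, 1.0), (2, 2, 4.5)]
P, Q = sgd_mf(ratings, num_users=3, num_items=3, k=4)
print(P[0] @ Q[0])  # approaches the observed rating of 5.0 as training proceeds

Parallel variants such as DSGD partition the rating matrix, P and Q into blocks so that workers can apply exactly these updates concurrently on mutually disjoint blocks, exchanging factor blocks between epochs via message passing.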
Files
Original bundle
- Name: ÖFA_MastersThesis.pdf
- Size: 1.18 MB
- Format: Adobe Portable Document Format
- Description: Full printable version