Matrix factorization with stochastic gradient descent for recommender systems

buir.advisor: Aykanat, Cevdet
dc.contributor.author: Aktulum, Ömer Faruk
dc.date.accessioned: 2019-02-26T12:56:48Z
dc.date.available: 2019-02-26T12:56:48Z
dc.date.copyright: 2019-02
dc.date.issued: 2019-02
dc.date.submitted: 2019-02-25
dc.department: Department of Computer Engineering
dc.description: Cataloged from PDF version of article.
dc.description: Thesis (M.S.): İhsan Doğramacı Bilkent University, Department of Computer Engineering, 2019.
dc.description: Includes bibliographical references (leaves 69-72).
dc.description.abstract: Matrix factorization is an efficient technique for disclosing latent features of real-world data. It finds applications in areas such as text mining, image analysis, social networks and, more recently and popularly, recommendation systems. Alternating Least Squares (ALS), Stochastic Gradient Descent (SGD) and Coordinate Descent (CD) are among the methods commonly used to factorize large matrices. SGD-based factorization proved the most successful of these methods after the Netflix and KDD Cup competitions, where the winners' algorithms relied on SGD-based methods. Parallelization of SGD then became a hot topic and has been studied extensively in the literature in recent years. We focus on parallel SGD algorithms developed for shared-memory and distributed-memory systems. Shared-memory parallelizations include works such as HogWild, FPSGD and MLGF-MF, and distributed-memory parallelizations include works such as DSGD, GASGD and NOMAD. We present a survey containing an exhaustive analysis of these studies, and then focus particularly on DSGD, implementing it with the message-passing paradigm and testing its performance in terms of convergence and speedup. In contrast to existing works, the experiments use many real-world datasets that we produce from published raw data. We show that DSGD is a robust algorithm for large-scale datasets and achieves near-linear speedup with fast convergence rates.
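The abstract describes SGD-based matrix factorization of a sparse rating matrix. As a minimal illustrative sketch (this is not the thesis code; the function name, hyperparameters, and toy data below are all assumptions), a serial SGD factorization might look like:

```python
import numpy as np

def sgd_mf(ratings, n_users, n_items, k=8, lr=0.05, reg=0.05, epochs=200, seed=0):
    """Factorize observed ratings [(user, item, rating), ...] into user
    factors P and item factors Q by stochastic gradient descent."""
    rng = np.random.default_rng(seed)
    P = rng.normal(scale=0.1, size=(n_users, k))   # user latent factors
    Q = rng.normal(scale=0.1, size=(n_items, k))   # item latent factors
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]                  # error on one observed entry
            P[u] += lr * (err * Q[i] - reg * P[u]) # gradient step, user factors
            Q[i] += lr * (err * P[u] - reg * Q[i]) # gradient step, item factors
    return P, Q

# Toy example: 3 users x 3 items, six observed ratings.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0),
           (1, 2, 1.0), (2, 1, 2.0), (2, 2, 4.0)]
P, Q = sgd_mf(ratings, n_users=3, n_items=3)
rmse = np.sqrt(np.mean([(r - P[u] @ Q[i]) ** 2 for u, i, r in ratings]))
```

DSGD parallelizes this inner loop by partitioning the rating matrix into blocks and scheduling non-conflicting blocks (sharing no row or column) across processes in each stratum, which is what makes a message-passing implementation natural.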
dc.description.degree: M.S.
dc.description.statementofresponsibility: by Ömer Faruk Aktulum
dc.format.extent: xiii, 72 leaves : charts ; 30 cm.
dc.identifier.itemid: B159740
dc.identifier.uri: http://hdl.handle.net/11693/50632
dc.language.iso: English
dc.publisher: Bilkent University
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: Recommender system
dc.subject: Matrix factorization
dc.subject: Stochastic gradient descent
dc.subject: Parallel computing
dc.subject: Shared memory algorithms
dc.subject: Distributed memory algorithms
dc.title: Matrix factorization with stochastic gradient descent for recommender systems
dc.title.alternative: Öneri sistemleri için olasılıksal eğim iniş ile matris çarpanlarına ayırma
dc.type: Thesis
Files
Original bundle
Name: ÖFA_MastersThesis.pdf
Size: 1.18 MB
Format: Adobe Portable Document Format
Description: Full printable version