Parallel stochastic gradient descent on multicore architectures

buir.advisor: Özdal, M. Mustafa
dc.contributor.author: Gülcan, Selçuk
dc.date.accessioned: 2020-09-21T07:20:32Z
dc.date.available: 2020-09-21T07:20:32Z
dc.date.copyright: 2020-09
dc.date.issued: 2020-09
dc.date.submitted: 2020-09-18
dc.description: Cataloged from PDF version of article.
dc.description: Thesis (M.S.): İhsan Doğramacı Bilkent University, Department of Computer Engineering, 2020.
dc.description: Includes bibliographical references (leaves 65-68).
dc.description.abstract: The focus of the thesis is efficient parallelization of the Stochastic Gradient Descent (SGD) algorithm for matrix completion problems on multicore architectures. Asynchronous methods and block-based methods utilizing 2D grid partitioning for task-to-thread assignment are commonly used approaches for shared-memory parallelization. However, asynchronous methods can have performance issues due to their memory access patterns, whereas grid-based methods can suffer from load imbalance, especially when data sets are skewed and sparse. In this thesis, we first analyze the parallel performance bottlenecks of existing SGD algorithms in detail. Then, we propose new algorithms to alleviate these bottlenecks. Specifically, we propose bin-packing-based algorithms to balance thread loads under 2D partitioning. We also propose a grid-based asynchronous parallel SGD algorithm that improves cache utilization by changing the entry update order without affecting the factor update order and by rearranging the memory layouts of the latent factor matrices. Our experiments show that the proposed methods perform significantly better than existing approaches on shared-memory multicore systems.
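To make the problem setting in the abstract concrete, the core computation that the thesis parallelizes can be sketched as a plain sequential SGD loop for matrix completion: each observed entry (i, j, r) of a sparse ratings matrix updates one row of each latent factor matrix. This is a generic textbook sketch under illustrative parameter choices, not the parallel algorithms proposed in the thesis; all names below are hypothetical.

```python
import numpy as np

def sgd_matrix_completion(entries, num_rows, num_cols, rank=8,
                          lr=0.01, reg=0.05, epochs=20, seed=0):
    """Sequential SGD for matrix completion: approximate R ~ W @ H.T
    from observed entries (i, j, r_ij). Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(num_rows, rank))  # row latent factors
    H = rng.normal(scale=0.1, size=(num_cols, rank))  # column latent factors
    for _ in range(epochs):
        rng.shuffle(entries)  # visit observed entries in random order
        for i, j, r in entries:
            err = r - W[i] @ H[j]      # prediction error on this entry
            w_i = W[i].copy()          # keep old row factor for H's update
            W[i] += lr * (err * H[j] - reg * w_i)   # step on row factor
            H[j] += lr * (err * w_i - reg * H[j])   # step on column factor
    return W, H
```

The shared-memory parallelization difficulty discussed in the abstract stems from this loop: two entries in the same row (or column) touch the same factor vector, so threads either synchronize (block/grid methods) or race (asynchronous methods), which is what the proposed load-balancing and cache-aware ordering schemes address.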
dc.description.provenance: Submitted by Betül Özen (ozen@bilkent.edu.tr) on 2020-09-21T07:20:32Z. No. of bitstreams: 1. THESIS.pdf: 2502325 bytes, checksum: c7bdde0e88107c573495f7f8597ce401 (MD5)
dc.description.provenance: Made available in DSpace on 2020-09-21T07:20:32Z (GMT). No. of bitstreams: 1. THESIS.pdf: 2502325 bytes, checksum: c7bdde0e88107c573495f7f8597ce401 (MD5). Previous issue date: 2020-09
dc.description.statementofresponsibility: by Selçuk Gülcan
dc.embargo.release: 2021-03-18
dc.format.extent: x, 68 leaves : charts (some color) ; 30 cm.
dc.identifier.itemid: B160499
dc.identifier.uri: http://hdl.handle.net/11693/54056
dc.language.iso: English
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: Stochastic gradient descent
dc.subject: Parallel shared memory system
dc.subject: Matrix completion
dc.subject: Performance analysis
dc.subject: Load balancing
dc.title: Parallel stochastic gradient descent on multicore architectures
dc.title.alternative: Çok çekirdekli sistemlerde paralel olasılıksal gradyan alçalma (Parallel stochastic gradient descent on multicore systems)
dc.type: Thesis
thesis.degree.discipline: Computer Engineering
thesis.degree.grantor: Bilkent University
thesis.degree.level: Master's
thesis.degree.name: MS (Master of Science)

Files

Original bundle

Name: THESIS.pdf
Size: 2.39 MB
Format: Adobe Portable Document Format
Description: Full printable version

License bundle

Name: license.txt
Size: 1.71 KB
Description: Item-specific license agreed upon to submission