Fast and efficient model parallelism for deep convolutional neural networks
buir.advisor | Özdal, Muhammet Mustafa | |
dc.contributor.author | Eserol, Burak | |
dc.date.accessioned | 2019-08-23T06:41:05Z | |
dc.date.available | 2019-08-23T06:41:05Z | |
dc.date.copyright | 2019-08 | |
dc.date.issued | 2019-08 | |
dc.date.submitted | 2019-08-21 | |
dc.description | Cataloged from PDF version of article. | en_US |
dc.description | Thesis (M.S.): İhsan Doğramacı Bilkent University, Department of Computer Engineering, 2019. | en_US |
dc.description | Includes bibliographical references (leaves 72-76). | en_US |
dc.description.abstract | Convolutional Neural Networks (CNNs) have become very popular and successful in recent years. Increasing the depth and the number of parameters of CNNs has been crucial to this success. However, deep convolutional neural networks are hard to fit into a single machine's memory, and training them takes a very long time. There are two parallelism methods to address this problem: data parallelism and model parallelism. In data parallelism, the neural network model is replicated among different machines and the data is partitioned among them. Each replica trains on its portion of the data and exchanges parameters and their gradients with the other replicas. This exchange results in a huge communication volume, which slows down the training and convergence of the deep neural network. In model parallelism, a deep neural network model is partitioned among different machines and trained in a pipelined manner. However, it requires a human expert to partition the network, and it is hard to obtain both low communication volume and a good computational load balance ratio with known partitioning methods. In this thesis, a new model parallelism method called hypergraph partitioned model parallelism is proposed. It does not require a human expert to partition the network, and it achieves a better computational load balance ratio along with lower communication volume than existing model parallelism techniques. The proposed method also reduces the communication volume overhead of data parallelism by 93%. Finally, it is shown that distributing a deep neural network with the proposed hypergraph partitioned model parallelism, rather than the existing parallelism methods, makes the network converge faster to the target accuracy. | en_US |
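As an illustrative aside to the abstract, the sketch below shows the two quantities the thesis optimizes, communication volume and computational load balance ratio, evaluated on a toy layer-level hypergraph model of a CNN. It assumes layers are vertices weighted by compute cost and inter-layer activation tensors are nets weighted by their size; every layer name and number here is hypothetical, and a real implementation would obtain the partition from a hypergraph partitioning tool rather than hard-code one as done here.

```python
# Minimal sketch: evaluate a 2-way partition of a layer-level hypergraph
# model of a CNN. Vertices = layers (weight = estimated compute cost),
# nets = activation tensors connecting producer/consumer layers
# (cost = tensor size). All names and numbers are hypothetical.

# layer -> compute weight (e.g., estimated GFLOPs per batch)
compute = {"conv1": 4.0, "conv2": 8.0, "conv3": 8.0, "fc1": 2.0, "fc2": 1.0}

# net (activation tensor) -> (layers it touches, communication cost in MB)
nets = {
    "act1": ({"conv1", "conv2"}, 3.2),
    "act2": ({"conv2", "conv3"}, 1.6),
    "act3": ({"conv3", "fc1"}, 0.8),
    "act4": ({"fc1", "fc2"}, 0.1),
}

# a candidate 2-way partition: layer -> machine id
part = {"conv1": 0, "conv2": 0, "conv3": 1, "fc1": 1, "fc2": 1}

def cut_volume(nets, part):
    """Total cost of nets whose pins span more than one part
    (connectivity-minus-one metric)."""
    total = 0.0
    for pins, cost in nets.values():
        spanned = {part[v] for v in pins}
        total += cost * (len(spanned) - 1)
    return total

def load_balance_ratio(compute, part, k=2):
    """Maximum part load divided by average part load; 1.0 is perfect."""
    loads = [0.0] * k
    for layer, w in compute.items():
        loads[part[layer]] += w
    return max(loads) / (sum(loads) / k)

print("communication volume (MB):", cut_volume(nets, part))      # 1.6
print("load balance ratio:", load_balance_ratio(compute, part))  # ~1.04
```

In this framing, a hypergraph partitioner searches for the assignment `part` that minimizes `cut_volume` subject to a bound on `load_balance_ratio`, which is how a layer-to-machine mapping can be produced automatically instead of by a human expert.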
dc.description.provenance | Submitted by Betül Özen (ozen@bilkent.edu.tr) on 2019-08-23T06:41:05Z No. of bitstreams: 1 BurakEserol_Thesis.pdf: 1127957 bytes, checksum: cb92658a3dd513d2a7aa83ef2f8f5f30 (MD5) | en |
dc.description.provenance | Made available in DSpace on 2019-08-23T06:41:05Z (GMT). No. of bitstreams: 1 BurakEserol_Thesis.pdf: 1127957 bytes, checksum: cb92658a3dd513d2a7aa83ef2f8f5f30 (MD5) Previous issue date: 2019-08 | en |
dc.description.statementofresponsibility | by Burak Eserol | en_US |
dc.embargo.release | 2020-02-19 | |
dc.format.extent | xvi, 81 leaves : charts (some color) ; 30 cm. | en_US |
dc.identifier.itemid | B106908 | |
dc.identifier.uri | http://hdl.handle.net/11693/52360 | |
dc.language.iso | English | en_US |
dc.rights | info:eu-repo/semantics/openAccess | en_US |
dc.subject | Parallel and distributed deep learning | en_US |
dc.subject | Convolutional Neural Networks | en_US |
dc.subject | Model parallelism | en_US |
dc.subject | Data parallelism | en_US |
dc.title | Fast and efficient model parallelism for deep convolutional neural networks | en_US |
dc.title.alternative | Derin konvolüsyonel sinir ağları için hızlı ve verimli model paralelleştirmesi | en_US |
dc.type | Thesis | en_US |
thesis.degree.discipline | Computer Engineering | |
thesis.degree.grantor | Bilkent University | |
thesis.degree.level | Master's | |
thesis.degree.name | MS (Master of Science) | |
Files
Original bundle
- Name: BurakEserol_Thesis.pdf
- Size: 1.08 MB
- Format: Adobe Portable Document Format
- Description: Full printable version

License bundle
- Name: license.txt
- Size: 1.71 KB
- Description: Item-specific license agreed upon to submission