Slicing based code parallelization for minimizing inter-processor communication
dc.citation.epage | 95 | en_US |
dc.citation.spage | 87 | en_US |
dc.contributor.author | Kandemir, M. | en_US |
dc.contributor.author | Zhang, Y. | en_US |
dc.contributor.author | Muralidhara, S. P. | en_US |
dc.contributor.author | Öztürk, Özcan | en_US |
dc.contributor.author | Narayanan, S. H. K. | en_US |
dc.coverage.spatial | Grenoble, France | |
dc.date.accessioned | 2016-02-08T12:25:08Z | |
dc.date.available | 2016-02-08T12:25:08Z | |
dc.date.issued | 2009-10 | en_US |
dc.department | Department of Computer Engineering | en_US |
dc.description | Date of Conference: 11 - 16 October, 2009 | |
dc.description | Conference name: CASES '09 Proceedings of the 2009 international conference on Compilers, architecture, and synthesis for embedded systems | |
dc.description.abstract | One of the critical problems in distributed memory multi-core architectures is scalable parallelization that minimizes inter-processor communication. Using the concept of iteration space slicing, this paper presents a new code parallelization scheme for data-intensive applications. This scheme targets distributed memory multi-core architectures, and formulates the problem of data-computation distribution (partitioning) across parallel processors using slicing such that, starting with the partitioning of the output arrays, it iteratively determines the partitions of other arrays as well as iteration spaces of the loop nests in the application code. The goal is to minimize inter-processor data communications. Based on this iteration space slicing based formulation of the problem, we also propose a solution scheme. The proposed data-computation scheme is evaluated using six data-intensive benchmark programs. In our experimental evaluation, we also compare this scheme against three alternate data-computation distribution schemes. The results obtained are very encouraging, indicating around 10% better speedup, with 16 processors, over the next-best scheme when averaged over all benchmark codes we tested. Copyright 2009 ACM. | en_US |
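The abstract describes partitioning that starts from the output arrays and propagates slices backward to input arrays and loop iteration spaces. The sketch below is purely illustrative and is not the paper's algorithm; all names (`block_partition`, `slice_loop_nest`) and the 1D stencil loop are hypothetical, chosen only to show how an output-array partition can induce iteration-space slices and input-array slices, with the leftover overlap corresponding to inter-processor communication (halo exchange).

```python
# Hypothetical sketch of output-driven iteration space slicing for the loop
#   for i in 1..N-2: B[i] = A[i-1] + A[i] + A[i+1]
# Starting from a block partition of the output array B, each processor's
# iteration slice and required slice of input array A are derived; elements
# of A read but not owned are the communication (halo) volume.

def block_partition(lo, hi, num_procs):
    """Split the half-open index range [lo, hi) into contiguous blocks."""
    n = hi - lo
    bounds = [lo + (n * p) // num_procs for p in range(num_procs + 1)]
    return [(bounds[p], bounds[p + 1]) for p in range(num_procs)]

def slice_loop_nest(out_parts, footprint=(-1, 0, 1)):
    """For each processor's B slice, derive its iteration slice (identical,
    since iteration i writes B[i]) and the A slice it reads (the B slice
    widened by the stencil footprint)."""
    plan = []
    for (lo, hi) in out_parts:
        reads = (lo + min(footprint), (hi - 1) + max(footprint) + 1)
        plan.append({"iters": (lo, hi), "B": (lo, hi), "A": reads})
    return plan

parts = block_partition(1, 15, 4)  # distribute B[1..14] over 4 processors
plan = slice_loop_nest(parts)
for p, info in enumerate(plan):
    a_lo, a_hi = info["A"]
    own_lo, own_hi = info["B"]
    halo = max(0, own_lo - a_lo) + max(0, a_hi - own_hi)
    print(f"proc {p}: iters {info['iters']}, reads A{info['A']}, halo {halo}")
```

Under this owner-computes assumption each processor only communicates the stencil overlap at its block boundaries, which is the kind of communication the paper's slicing-based formulation seeks to minimize.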
dc.description.provenance | Made available in DSpace on 2016-02-08T12:25:08Z (GMT). No. of bitstreams: 1 bilkent-research-paper.pdf: 70227 bytes, checksum: 26e812c6f5156f83f0e77b261a471b5a (MD5) Previous issue date: 2009 | en |
dc.identifier.doi | 10.1145/1629395.1629409 | en_US |
dc.identifier.uri | http://hdl.handle.net/11693/28611 | en_US |
dc.language.iso | English | en_US |
dc.publisher | ACM | en_US |
dc.relation.isversionof | http://dx.doi.org/10.1145/1629395.1629409 | en_US |
dc.source.title | CASES '09 Proceedings of the 2009 international conference on Compilers, architecture, and synthesis for embedded systems | en_US |
dc.subject | Automatic code parallelization | en_US |
dc.subject | Code analysis and optimization | en_US |
dc.subject | Iteration space slicing | en_US |
dc.subject | Parallelizing compilers | en_US |
dc.subject | Automatic codes | en_US |
dc.subject | Code analysis | en_US |
dc.subject | Iteration spaces | en_US |
dc.subject | Parallelizations | en_US |
dc.subject | Parallelizing compiler | en_US |
dc.subject | Embedded systems | en_US |
dc.subject | Optimization | en_US |
dc.subject | Parallel architectures | en_US |
dc.subject | Program compilers | en_US |
dc.title | Slicing based code parallelization for minimizing inter-processor communication | en_US |
dc.type | Conference Paper | en_US |
Files
Original bundle
- Name: Slicing based code parallelization for minimizing inter-processor communication.pdf
- Size: 571.98 KB
- Format: Adobe Portable Document Format
- Description: Full printable version