Slicing based code parallelization for minimizing inter-processor communication

dc.citation.epage: 95
dc.citation.spage: 87
dc.contributor.author: Kandemir, M.
dc.contributor.author: Zhang, Y.
dc.contributor.author: Muralidhara, S. P.
dc.contributor.author: Öztürk, Özcan
dc.contributor.author: Narayanan, S. H. K.
dc.coverage.spatial: Grenoble, France
dc.date.accessioned: 2016-02-08T12:25:08Z
dc.date.available: 2016-02-08T12:25:08Z
dc.date.issued: 2009-10
dc.department: Department of Computer Engineering
dc.description: Date of conference: 11 - 16 October, 2009
dc.description: Conference name: CASES '09 Proceedings of the 2009 international conference on Compilers, architecture, and synthesis for embedded systems
dc.description.abstract: One of the critical problems in distributed memory multi-core architectures is scalable parallelization that minimizes inter-processor communication. Using the concept of iteration space slicing, this paper presents a new code parallelization scheme for data-intensive applications. The scheme targets distributed memory multi-core architectures and formulates the problem of data-computation distribution (partitioning) across parallel processors using slicing: starting with the partitioning of the output arrays, it iteratively determines the partitions of the other arrays as well as the iteration spaces of the loop nests in the application code. The goal is to minimize inter-processor data communication. Based on this iteration space slicing based formulation of the problem, we also propose a solution scheme. The proposed data-computation distribution scheme is evaluated using six data-intensive benchmark programs. In our experimental evaluation, we also compare this scheme against three alternative data-computation distribution schemes. The results obtained are very encouraging, indicating around 10% better speedup with 16 processors over the next-best scheme, averaged over all benchmark codes we tested. Copyright 2009 ACM.
dc.description.provenance: Made available in DSpace on 2016-02-08T12:25:08Z (GMT). No. of bitstreams: 1; bilkent-research-paper.pdf: 70227 bytes, checksum: 26e812c6f5156f83f0e77b261a471b5a (MD5). Previous issue date: 2009
dc.identifier.doi: 10.1145/1629395.1629409
dc.identifier.uri: http://hdl.handle.net/11693/28611
dc.language.iso: English
dc.publisher: ACM
dc.relation.isversionof: http://dx.doi.org/10.1145/1629395.1629409
dc.source.title: CASES '09 Proceedings of the 2009 international conference on Compilers, architecture, and synthesis for embedded systems
dc.subject: Automatic code parallelization
dc.subject: Code analysis and optimization
dc.subject: Iteration space slicing
dc.subject: Parallelizing compilers
dc.subject: Automatic codes
dc.subject: Code analysis
dc.subject: Iteration spaces
dc.subject: Parallelizations
dc.subject: Parallelizing compiler
dc.subject: Embedded systems
dc.subject: Optimization
dc.subject: Parallel architectures
dc.subject: Program compilers
dc.title: Slicing based code parallelization for minimizing inter-processor communication
dc.type: Conference Paper
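
The abstract above outlines the approach at a high level: partition the output arrays first, then slice backwards through the loop nests to derive the partitions of the other arrays and of the iteration spaces, with the aim of minimizing inter-processor communication. The following is a minimal, hypothetical Python sketch of that idea, not the paper's actual algorithm; the 1-D loop nest, affine access functions, and block (owner-computes) partition are assumptions made purely for illustration.

```python
# Illustrative sketch only (not the paper's formulation): assume a loop nest
#   for i in range(N): A[i] = B[i] + B[i+1]
# and a block partition of the output array A. Starting from A's partition,
# derive each processor's iteration-space slice and the B elements it reads;
# B elements owned by another processor approximate the inter-processor
# communication that a data-computation distribution scheme tries to minimize.

def block_partition(n, p):
    """Split index range [0, n) into p contiguous blocks."""
    size = (n + p - 1) // p
    return [range(k * size, min((k + 1) * size, n)) for k in range(p)]

def backward_slice(iters, accesses):
    """Input elements read by an iteration slice, for affine access functions."""
    return {f(i) for i in iters for f in accesses}

N, P = 16, 4
accesses = [lambda i: i, lambda i: i + 1]      # reads B[i] and B[i+1]

out_parts = block_partition(N, P)              # partition of the output array A
owner = lambda j: min(j * P // N, P - 1)       # block owner of element j of B

for k, iters in enumerate(out_parts):
    needed = backward_slice(iters, accesses)   # B elements processor k reads
    remote = sum(1 for j in needed if owner(j) != k)
    print(f"proc {k}: iterations {iters}, B elements needed {len(needed)}, remote {remote}")
```

In this toy example, only the boundary element of each block of B is remote; a slicing-based distribution seeks partitions of the arrays and iteration spaces that keep such remote accesses small.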

Files

Original bundle

Name: Slicing based code parallelization for minimizing inter-processor communication.pdf
Size: 571.98 KB
Format: Adobe Portable Document Format
Description: Full printable version