Cache hierarchy-aware query mapping on emerging multicore architectures

dc.citation.epage: 415
dc.citation.issueNumber: 3
dc.citation.spage: 403
dc.citation.volumeNumber: 66
dc.contributor.author: Öztürk, Özcan
dc.contributor.author: Orhan, U.
dc.contributor.author: Ding, W.
dc.contributor.author: Yedlapalli, P.
dc.contributor.author: Kandemir, M. T.
dc.date.accessioned: 2018-04-12T11:44:23Z
dc.date.available: 2018-04-12T11:44:23Z
dc.date.issued: 2017
dc.department: Department of Computer Engineering
dc.description.abstract: One of the important characteristics of emerging multicores/manycores is the existence of 'shared on-chip caches,' through which different threads/processes can share data (help each other) or displace each other's data (hurt each other). Most current commercial multicore systems on the market have on-chip cache hierarchies with multiple layers (typically L1, L2, and L3, the last two being either fully or partially shared). In the context of database workloads, exploiting the full potential of these caches can be critical. Motivated by this observation, our main contribution in this work is to present and experimentally evaluate a cache hierarchy-aware query mapping scheme targeting workloads that consist of batch queries to be executed on emerging multicores. Our proposed scheme distributes a given batch of queries across the cores of a target multicore architecture based on the affinity relations among the queries. The primary goal of this scheme is to maximize the utilization of the underlying on-chip cache hierarchy while keeping the load nearly balanced across domain affinities. Each domain affinity in this context corresponds to a cache structure bounded by a particular level of the cache hierarchy. A graph partitioning-based method is employed to distribute queries across cores, and an integer linear programming (ILP) formulation is used to address locality and load balancing concerns. We evaluate our scheme using the TPC-H benchmarks on an Intel Xeon based multicore. Our solution achieves up to 25 percent improvement in individual query execution times and 15-19 percent improvement in throughput over the default Linux-based process scheduler. © 1968-2012 IEEE.
dc.description.provenance: Made available in DSpace on 2018-04-12T11:44:23Z (GMT). No. of bitstreams: 1. bilkent-research-paper.pdf: 179475 bytes, checksum: ea0bedeb05ac9ccfb983c327e155f0c2 (MD5). Previous issue date: 2017
dc.identifier.doi: 10.1109/TC.2016.2605682
dc.identifier.uri: http://hdl.handle.net/11693/37575
dc.language.iso: English
dc.publisher: IEEE
dc.relation.isversionof: http://dx.doi.org/10.1109/TC.2016.2605682
dc.source.title: IEEE Transactions on Computers
dc.subject: Software architecture
dc.subject: Architecture
dc.subject: Cache
dc.subject: Schedule
dc.subject: Multicore
dc.subject: Query
dc.title: Cache hierarchy-aware query mapping on emerging multicore architectures
dc.type: Article
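Note: the abstract above describes an affinity-driven mapping of batch queries onto the cores that share parts of the cache hierarchy. The short Python sketch below is only a rough illustration of that idea under assumptions made for this example: it groups queries by a toy affinity measure (shared tables) and assigns them greedily to two cache domains while keeping load roughly balanced. The query names, costs, table sets, the 1.5x load threshold, and the greedy heuristic are all hypothetical; the paper itself uses a graph partitioning-based method together with an ILP formulation to handle locality and load balance.

    # Hypothetical batch of queries: an estimated cost and the tables each one touches.
    queries = {
        "Q1": {"cost": 4, "tables": {"lineitem", "orders"}},
        "Q3": {"cost": 6, "tables": {"lineitem", "orders", "customer"}},
        "Q6": {"cost": 2, "tables": {"lineitem"}},
        "Q9": {"cost": 5, "tables": {"part", "supplier", "lineitem"}},
    }

    def affinity(q1, q2):
        # Toy affinity: number of tables two queries share (a stand-in for data reuse).
        return len(queries[q1]["tables"] & queries[q2]["tables"])

    # Two cache domains, e.g., two groups of cores that each share a last-level cache.
    domains = {0: {"queries": [], "load": 0}, 1: {"queries": [], "load": 0}}

    # Greedy placement: prefer the domain where the query has the highest accumulated
    # affinity, but fall back to the lighter domain when the loads diverge too much.
    for q, info in sorted(queries.items(), key=lambda kv: -kv[1]["cost"]):
        scores = {d: sum(affinity(q, other) for other in dom["queries"])
                  for d, dom in domains.items()}
        best = max(domains, key=lambda d: scores[d])
        lightest = min(domains, key=lambda d: domains[d]["load"])
        target = best if domains[best]["load"] <= 1.5 * domains[lightest]["load"] else lightest
        domains[target]["queries"].append(q)
        domains[target]["load"] += info["cost"]

    for d, dom in domains.items():
        print(f"domain {d}: {dom['queries']} (load {dom['load']})")

The sketch only conveys the intuition of trading locality (co-locating high-affinity queries in the same cache domain) against load balance; the paper formulates this trade-off exactly with graph partitioning and ILP.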

Files

Original bundle

Name: Cache Hierarchy-Aware Query Mapping on Emerging Multicore Architectures.pdf
Size: 1.43 MB
Format: Adobe Portable Document Format
Description: Full Printable Version