Prefetch throttling and data pinning for improving performance of shared caches

dc.contributor.author: Öztürk, Özcan (en_US)
dc.contributor.author: Son, S. W. (en_US)
dc.contributor.author: Kandemir, M. (en_US)
dc.contributor.author: Karaköy, M. (en_US)
dc.coverage.spatial: Austin, TX, USA
dc.date.accessioned: 2016-02-08T11:36:52Z
dc.date.available: 2016-02-08T11:36:52Z
dc.date.issued: 2008-11 (en_US)
dc.department: Department of Computer Engineering (en_US)
dc.description: Date of Conference: 15-21 Nov. 2008
dc.description: Conference name: SC '08: Proceedings of the 2008 ACM/IEEE Conference on Supercomputing
dc.description.abstract: In this paper, we (i) quantify the impact of compiler-directed I/O prefetching on shared caches at I/O nodes. The experimental data collected shows that while I/O prefetching brings some benefits, its effectiveness reduces significantly as the number of clients (compute nodes) is increased; (ii) identify inter-client misses due to harmful I/O prefetches as one of the main sources of this reduction in performance as the number of clients grows; and (iii) propose and experimentally evaluate prefetch throttling and data pinning schemes to improve the performance of I/O prefetching. Prefetch throttling prevents one or more clients from issuing further prefetches if such prefetches are predicted to be harmful, i.e., likely to replace in the memory cache useful data accessed by other clients. Data pinning, on the other hand, makes selected data blocks immune to harmful prefetches by pinning them in the memory cache. We show that these two schemes can be applied in isolation or combined, and at a coarse or fine granularity. Our experiments with these two optimizations using four disk-intensive applications reveal that they improve performance by 9.7% and 15.1% on average over standard compiler-directed I/O prefetching and the no-prefetch case, respectively, when 8 clients are used. © 2008 IEEE. (en_US)
dc.identifier.doi: 10.1109/SC.2008.5213128 (en_US)
dc.identifier.uri: http://hdl.handle.net/11693/26822
dc.language.iso: English (en_US)
dc.publisher: IEEE
dc.relation.isversionof: http://dx.doi.org/10.1109/SC.2008.5213128 (en_US)
dc.source.title: SC - International Conference for High Performance Computing, Networking, Storage and Analysis, SC 2008 (en_US)
dc.subject: Data blocks (en_US)
dc.subject: Experimental data (en_US)
dc.subject: Fine granularity (en_US)
dc.subject: Improving performance (en_US)
dc.subject: Prefetches (en_US)
dc.subject: Prefetching (en_US)
dc.subject: Shared cache (en_US)
dc.subject: Computer science (en_US)
dc.subject: Cache memory (en_US)
dc.title: Prefetch throttling and data pinning for improving performance of shared caches (en_US)
dc.type: Conference Paper (en_US)
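
To make the two schemes described in the abstract concrete, the sketch below models a shared memory cache at an I/O node with per-client prefetch throttling and data pinning. It is a minimal illustration, not the paper's implementation: the structure names, the HARM_THRESHOLD cutoff, and the rule that charges a prefetch as harmful when it displaces another client's recently used block are all assumptions made for this example.

/*
 * Illustrative sketch (not the paper's code) of prefetch throttling and
 * data pinning at an I/O node's shared memory cache. All names and the
 * HARM_THRESHOLD value are assumptions made for this example.
 */
#include <stdbool.h>

#define NUM_CLIENTS     8
#define CACHE_BLOCKS    1024
#define HARM_THRESHOLD  4   /* harmful prefetches tolerated per client (assumed) */

typedef struct {
    long block_id;      /* disk block held in this slot, -1 if empty          */
    int  owner;         /* client whose request brought the block in          */
    bool prefetched;    /* brought in by a prefetch rather than a demand miss */
    bool pinned;        /* data pinning: immune to eviction                   */
    bool referenced;    /* accessed since it was brought in                   */
} cache_slot_t;

static cache_slot_t cache[CACHE_BLOCKS];
static int harmful_prefetches[NUM_CLIENTS];  /* per-client harm counters */

void init_cache(void)
{
    for (int i = 0; i < CACHE_BLOCKS; i++)
        cache[i].block_id = -1;
}

/* Prefetch throttling: a client whose recent prefetches have displaced
 * blocks other clients were still using stops issuing new prefetches.  */
bool may_prefetch(int client)
{
    return harmful_prefetches[client] < HARM_THRESHOLD;
}

/* Choose an eviction victim, never touching pinned blocks. Empty slots
 * and unreferenced prefetched blocks are preferred; otherwise any
 * unpinned block is used.                                              */
static int choose_victim(void)
{
    for (int i = 0; i < CACHE_BLOCKS; i++)
        if (!cache[i].pinned &&
            (cache[i].block_id == -1 ||
             (cache[i].prefetched && !cache[i].referenced)))
            return i;
    for (int i = 0; i < CACHE_BLOCKS; i++)
        if (!cache[i].pinned)
            return i;
    return -1;  /* everything is pinned; the caller must wait or unpin */
}

/* Insert a block on behalf of 'client'. If the insertion is a prefetch
 * that displaces a block another client has been using, count it as an
 * inter-client harmful prefetch against the issuing client.            */
void insert_block(long block_id, int client, bool is_prefetch)
{
    int v = choose_victim();
    if (v < 0)
        return;  /* no unpinned slot available */
    cache_slot_t *s = &cache[v];
    if (is_prefetch && s->block_id != -1 &&
        s->owner != client && s->referenced)
        harmful_prefetches[client]++;
    s->block_id   = block_id;
    s->owner      = client;
    s->prefetched = is_prefetch;
    s->pinned     = false;
    s->referenced = false;
}

/* Data pinning: mark a block that other clients are expected to reuse,
 * so that later (possibly harmful) prefetches cannot displace it.      */
void pin_block(long block_id)
{
    for (int i = 0; i < CACHE_BLOCKS; i++)
        if (cache[i].block_id == block_id)
            cache[i].pinned = true;
}

A real system would also decay or reset the harm counters over time and would decide which blocks to pin using the compiler-provided access information that drives the prefetching in the first place; both policies, and the coarse- versus fine-granularity choice mentioned in the abstract, are omitted here for brevity.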

Files

Original bundle

Name: Prefetch throttling and data pinning for improving performance of shared caches.pdf
Size: 580.8 KB
Format: Adobe Portable Document Format
Description: Full printable version