Prefetch throttling and data pinning for improving performance of shared caches
dc.contributor.author | Öztürk, Özcan. | en_US |
dc.contributor.author | Son, S. W. | en_US |
dc.contributor.author | Kandemir, M. | en_US |
dc.contributor.author | Karaköy, M. | en_US |
dc.coverage.spatial | Austin, TX, USA | |
dc.date.accessioned | 2016-02-08T11:36:52Z | |
dc.date.available | 2016-02-08T11:36:52Z | |
dc.date.issued | 2008-11 | en_US |
dc.department | Department of Computer Engineering | en_US |
dc.description | Date of Conference: 15-21 Nov. 2008 | |
dc.description | Conference name: SC '08: Proceedings of the 2008 ACM/IEEE Conference on Supercomputing | |
dc.description.abstract | In this paper, we (i) quantify the impact of compiler-directed I/O prefetching on shared caches at I/O nodes; the experimental data collected show that while I/O prefetching brings some benefits, its effectiveness decreases significantly as the number of clients (compute nodes) increases; (ii) identify inter-client misses due to harmful I/O prefetches as one of the main sources of this performance reduction with an increased number of clients; and (iii) propose and experimentally evaluate prefetch throttling and data pinning schemes to improve the performance of I/O prefetching. Prefetch throttling prevents one or more clients from issuing further prefetches if such prefetches are predicted to be harmful, i.e., likely to replace useful data accessed by other clients in the memory cache. Data pinning, on the other hand, makes selected data blocks immune to harmful prefetches by pinning them in the memory cache. We show that these two schemes can be applied in isolation or in combination, and at a coarse or fine granularity. Our experiments with these two optimizations using four disk-intensive applications reveal that they improve performance by 9.7% and 15.1% on average over standard compiler-directed I/O prefetching and the no-prefetch case, respectively, when 8 clients are used. © 2008 IEEE. | en_US |
dc.identifier.doi | 10.1109/SC.2008.5213128 | en_US |
dc.identifier.uri | http://hdl.handle.net/11693/26822 | |
dc.language.iso | English | en_US |
dc.publisher | IEEE | |
dc.relation.isversionof | http://dx.doi.org/10.1109/SC.2008.5213128 | en_US |
dc.source.title | SC - International Conference for High Performance Computing, Networking, Storage and Analysis, SC 2008 | en_US |
dc.subject | Data blocks | en_US |
dc.subject | Experimental data | en_US |
dc.subject | Fine granularity | en_US |
dc.subject | Improving performance | en_US |
dc.subject | Prefetches | en_US |
dc.subject | Prefetching | en_US |
dc.subject | Shared cache | en_US |
dc.subject | Computer science | en_US |
dc.subject | Cache memory | en_US |
dc.title | Prefetch throttling and data pinning for improving performance of shared caches | en_US |
dc.type | Conference Paper | en_US |
Files
Original bundle
- Name: Prefetch throttling and data pinning for improving performance of shared caches.pdf
- Size: 580.8 KB
- Format: Adobe Portable Document Format
- Description: Full printable version
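The abstract describes the two schemes only at a high level. The sketch below is not taken from the paper; it is a minimal Python illustration, under simplified assumptions, of how prefetch throttling and data pinning could interact in a shared LRU block cache at an I/O node. All names (SharedCache, access, prefetch, pin) are hypothetical, and the "harmful prefetch" test here is a crude stand-in for the prediction mechanism the paper actually uses.

```python
from collections import OrderedDict

class SharedCache:
    """Hypothetical shared block cache at an I/O node with LRU replacement."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block_id -> last client to touch it (LRU -> MRU)
        self.pinned = set()          # block_ids immune to prefetch-driven eviction

    def _victim(self, skip_pinned):
        # Least recently used block, optionally skipping pinned blocks.
        for block in self.blocks:
            if skip_pinned and block in self.pinned:
                continue
            return block
        return None

    def access(self, client, block):
        """Demand access: the block is always cached, evicting the LRU block if full."""
        if block not in self.blocks and len(self.blocks) >= self.capacity:
            victim = self._victim(skip_pinned=False)
            del self.blocks[victim]
            self.pinned.discard(victim)
        self.blocks[block] = client
        self.blocks.move_to_end(block)

    def prefetch(self, client, block):
        """Prefetch throttling: drop a prefetch predicted to be harmful, i.e.
        one that would displace a pinned block or another client's data."""
        if block in self.blocks:
            return True              # already resident, nothing to do
        if len(self.blocks) >= self.capacity:
            victim = self._victim(skip_pinned=True)
            if victim is None or self.blocks[victim] != client:
                return False         # predicted harmful: throttle the prefetch
            del self.blocks[victim]
        self.blocks[block] = client
        return True

    def pin(self, block):
        """Data pinning: make a resident block immune to harmful prefetches."""
        if block in self.blocks:
            self.pinned.add(block)


# Example: a prefetch that would displace another client's data is throttled.
cache = SharedCache(capacity=2)
cache.access(client="A", block="b1")
cache.pin("b1")                                 # b1 now immune to prefetch eviction
cache.access(client="B", block="b2")
print(cache.prefetch(client="A", block="b3"))   # False: only victim left is B's data
```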