Profiler and compiler assisted adaptive I/O prefetching for shared storage caches
dc.citation.epage | 121 | en_US |
dc.citation.spage | 112 | en_US |
dc.contributor.author | Son, S. W. | en_US |
dc.contributor.author | Kandemir, M. | en_US |
dc.contributor.author | Kolcu, I. | en_US |
dc.contributor.author | Muralidhara, S. P. | en_US |
dc.contributor.author | Öztürk, Ö. | en_US |
dc.contributor.author | Karakoy, M. | en_US |
dc.coverage.spatial | Toronto, Ontario, Canada | |
dc.date.accessioned | 2016-02-08T11:36:21Z | |
dc.date.available | 2016-02-08T11:36:21Z | |
dc.date.issued | 2008-10 | en_US |
dc.department | Department of Computer Engineering | en_US |
dc.description | Date of Conference: 25-29 October, 2008 | |
dc.description | Conference name: PACT '08: Proceedings of the 17th International Conference on Parallel Architectures and Compilation Techniques | |
dc.description.abstract | I/O prefetching has been employed in the past as one of the mechanisms to hide large disk latencies. However, I/O prefetching in parallel applications is problematic when multiple CPUs share the same set of disks due to the possibility that prefetches from different CPUs can interact on shared memory caches in the I/O nodes in complex and unpredictable ways. In this paper, we (i) quantify the impact of compiler-directed I/O prefetching - developed originally in the context of sequential execution - on shared caches at I/O nodes. The experimental data collected shows that while I/O prefetching brings benefits, its effectiveness reduces significantly as the number of CPUs is increased; (ii) identify inter-CPU misses due to harmful prefetches as one of the main sources for this reduction in performance with the increased number of CPUs; and (iii) propose and experimentally evaluate a profiler and compiler assisted adaptive I/O prefetching scheme targeting shared storage caches. The proposed scheme obtains inter-thread data sharing information using profiling and, based on the captured data sharing patterns, divides the threads into clusters and assigns a separate (customized) I/O prefetcher thread for each cluster. In our approach, the compiler generates the I/O prefetching threads automatically. We implemented this new I/O prefetching scheme using a compiler and the PVFS file system running on Linux, and the empirical data collected clearly underline the importance of adapting I/O prefetching based on program phases. Specifically, our proposed scheme improves performance, on average, by 19.9%, 11.9% and 10.3% over the cases without I/O prefetching, with independent I/O prefetching (each CPU is performing compiler-directed I/O prefetching independently), and with one CPU prefetching (one CPU is reserved for prefetching on behalf of others), respectively, when 8 CPUs are used. Copyright 2008 ACM. | en_US |
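The scheme described in the abstract clusters compute threads by their profiled data-sharing patterns and dedicates one prefetcher thread to each cluster. Below is a minimal, hypothetical sketch of that idea, not the paper's compiler-generated implementation: each prefetcher thread issues posix_fadvise(POSIX_FADV_WILLNEED) hints for the file regions its cluster is about to read. The file names, structures, and request sizes are illustrative assumptions, and the paper's actual prefetchers target shared caches at PVFS I/O nodes rather than the local page cache used here.

/* Hypothetical sketch: one prefetcher thread per cluster of
 * data-sharing compute threads, staging upcoming read regions. */
#include <fcntl.h>
#include <pthread.h>
#include <unistd.h>

#define MAX_REQS 4

struct prefetch_req {            /* one upcoming access, from profiling */
    const char *path;            /* file the cluster will read */
    off_t offset;                /* start of the region */
    off_t length;                /* bytes to stage ahead of the reads */
};

struct cluster {                 /* threads grouped by shared access pattern */
    struct prefetch_req reqs[MAX_REQS];
    int nreqs;
};

/* Prefetcher thread body: hint each region before the cluster reads it. */
static void *prefetcher(void *arg)
{
    struct cluster *c = arg;
    for (int i = 0; i < c->nreqs; i++) {
        int fd = open(c->reqs[i].path, O_RDONLY);
        if (fd < 0)
            continue;
        /* Ask the kernel to bring this range into the cache. */
        posix_fadvise(fd, c->reqs[i].offset, c->reqs[i].length,
                      POSIX_FADV_WILLNEED);
        close(fd);
    }
    return NULL;
}

int main(void)
{
    /* Two clusters that profiling (hypothetically) identified. */
    struct cluster clusters[2] = {
        { .reqs = { { "data0.bin", 0,       1 << 20 } }, .nreqs = 1 },
        { .reqs = { { "data1.bin", 1 << 20, 1 << 20 } }, .nreqs = 1 },
    };
    pthread_t tids[2];
    for (int i = 0; i < 2; i++)
        pthread_create(&tids[i], NULL, prefetcher, &clusters[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(tids[i], NULL);
    return 0;
}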
dc.description.provenance | Made available in DSpace on 2016-02-08T11:36:21Z (GMT). No. of bitstreams: 1 bilkent-research-paper.pdf: 70227 bytes, checksum: 26e812c6f5156f83f0e77b261a471b5a (MD5) Previous issue date: 2008 | en |
dc.identifier.doi | 10.1145/1454115.1454133 | en_US |
dc.identifier.uri | http://hdl.handle.net/11693/26805 | en_US |
dc.language.iso | English | en_US |
dc.publisher | ACM | en_US |
dc.relation.isversionof | http://dx.doi.org/10.1145/1454115.1454133 | en_US |
dc.source.title | Parallel Architectures and Compilation Techniques - Conference Proceedings, PACT | en_US |
dc.subject | Adaptive | en_US |
dc.subject | Compiler | en_US |
dc.subject | Prefetching | en_US |
dc.subject | Profiler | en_US |
dc.subject | Shared Storage Cache | en_US |
dc.subject | Cache memory | en_US |
dc.subject | Decoding | en_US |
dc.subject | Disks (structural components) | en_US |
dc.subject | Parallel architectures | en_US |
dc.subject | Program compilers | en_US |
dc.title | Profiler and compiler assisted adaptive I/O prefetching for shared storage caches | en_US |
dc.type | Conference Paper | en_US |
Files
Original bundle
- Name: Profiler and compiler assisted adaptive I O prefetching for shared storage caches.pdf
- Size: 1.01 MB
- Format: Adobe Portable Document Format
- Description: Full printable version