Using data compression for increasing memory system utilization

dc.citation.epage: 914 (en_US)
dc.citation.issueNumber: 6 (en_US)
dc.citation.spage: 901 (en_US)
dc.citation.volumeNumber: 28 (en_US)
dc.contributor.author: Ozturk, O. (en_US)
dc.contributor.author: Kandemir, M. (en_US)
dc.contributor.author: Irwin, M. J. (en_US)
dc.date.accessioned: 2016-02-08T10:04:03Z
dc.date.available: 2016-02-08T10:04:03Z
dc.date.issued: 2009-06 (en_US)
dc.department: Department of Computer Engineering (en_US)
dc.description.abstract (en_US): The memory system presents one of the critical challenges in embedded system design and optimization. This is mainly due to the ever-increasing code complexity of embedded applications and the exponential increase seen in the amount of data they manipulate. The memory bottleneck is even more important for multiprocessor-system-on-a-chip (MPSoC) architectures due to the high cost of off-chip memory accesses in terms of both energy and performance. As a result, reducing the memory-space occupancy of embedded applications is very important and will be even more important in the next decade. While it is true that the on-chip memory capacity of embedded systems is continuously increasing, the increases in the complexity of embedded applications and the sizes of the data sets they process are far greater. Motivated by this observation, this paper presents and evaluates a compiler-driven approach to data compression for reducing memory-space occupancy. Our goal is to study how automated compiler support can help in deciding the set of data elements to compress/decompress and the points during execution at which these compressions/decompressions should be performed. We first study this problem in the context of single-core systems and then extend it to MPSoCs where we schedule compressions and decompressions intelligently such that they do not conflict with application execution as much as possible. Particularly, in MPSoCs, one needs to decide which processors should participate in the compression and decompression activities at any given point during the course of execution. We propose both static and dynamic algorithms for this purpose. In the static scheme, the processors are divided into two groups: those performing compression/decompression and those executing the application, and this grouping is maintained throughout the execution of the application. In the dynamic scheme, on the other hand, the execution starts with some grouping but this grouping can change during the course of execution, depending on the dynamic variations in the data access pattern. Our experimental results show that, in a single-core system, the proposed approach reduces maximum memory occupancy by 47.9% and average memory occupancy by 48.3% when averaged over all the benchmarks. Our results also indicate that, in an MPSoC, the average energy saving is 12.7% when all eight benchmarks are considered. While compressions and decompressions and related bookkeeping activities take extra cycles and memory space and consume additional energy, we found that the improvements they bring from the memory space, execution cycles, and energy perspectives are much higher than these overheads. © 2009 IEEE.
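The static versus dynamic processor grouping described in the abstract can be illustrated with a short sketch. This is not the paper's implementation: the class names, the queue-length rebalancing policy, and the use of zlib as a stand-in compressor are all illustrative assumptions.

```python
import zlib


def compress_block(data: bytes) -> bytes:
    """Stand-in for the on-chip compression step (zlib used purely for illustration)."""
    return zlib.compress(data)


class StaticScheme:
    """Static grouping: a fixed subset of processors handles compression and
    decompression, the rest run the application, for the entire execution."""

    def __init__(self, num_procs: int, num_helpers: int):
        self.helpers = set(range(num_helpers))             # compress/decompress
        self.workers = set(range(num_helpers, num_procs))  # run the application


class DynamicScheme:
    """Dynamic grouping: execution starts with some grouping, but processors can
    migrate between groups as the data access pattern varies."""

    def __init__(self, num_procs: int, num_helpers: int):
        self.helpers = set(range(num_helpers))
        self.workers = set(range(num_helpers, num_procs))

    def rebalance(self, pending_compressions: int, threshold: int = 4) -> None:
        # Hypothetical policy: if the compression backlog grows past a
        # threshold, borrow a worker as a helper; when the backlog is
        # empty, return a helper to application execution.
        if pending_compressions > threshold and len(self.workers) > 1:
            self.helpers.add(self.workers.pop())
        elif pending_compressions == 0 and len(self.helpers) > 1:
            self.workers.add(self.helpers.pop())
```

A dynamic scheme like this trades a small bookkeeping cost for better overlap between compression activity and application execution when the access pattern shifts.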
dc.identifier.doi: 10.1109/TCAD.2009.2017430 (en_US)
dc.identifier.issn: 0278-0070
dc.identifier.uri: http://hdl.handle.net/11693/22730
dc.language.iso: English (en_US)
dc.publisher: Institute of Electrical and Electronics Engineers (en_US)
dc.relation.isversionof: http://dx.doi.org/10.1109/TCAD.2009.2017430 (en_US)
dc.source.title: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (en_US)
dc.subject: Data compression (en_US)
dc.subject: Embedded system (en_US)
dc.subject: System-on-a-chip (en_US)
dc.subject: Computer science (en_US)
dc.subject: Design optimization (en_US)
dc.subject: Memory management (en_US)
dc.subject: Scanning probe microscopy (en_US)
dc.subject: Code complexity (en_US)
dc.subject: Compilers (en_US)
dc.subject: Data elements (en_US)
dc.subject: Memory optimization (en_US)
dc.subject: Dynamic variations (en_US)
dc.subject: Embedded application (en_US)
dc.subject: Embedded system design (en_US)
dc.subject: Energy perspectives (en_US)
dc.subject: Execution cycles (en_US)
dc.subject: Exponential increase (en_US)
dc.subject: High costs (en_US)
dc.subject: Memory bottleneck (en_US)
dc.subject: Memory space (en_US)
dc.subject: Memory systems (en_US)
dc.subject: Multiprocessor systems on a chip (en_US)
dc.subject: Multiprocessor-system-on-a-chip (MPSoC) (en_US)
dc.subject: Off-chip memories (en_US)
dc.subject: On-chip memory (en_US)
dc.subject: Schedule compression (en_US)
dc.subject: Static and dynamic (en_US)
dc.subject: Application specific integrated circuits (en_US)
dc.subject: Compaction (en_US)
dc.subject: Data reduction (en_US)
dc.subject: Embedded software (en_US)
dc.subject: Microprocessor chips (en_US)
dc.subject: Multiprocessing systems (en_US)
dc.subject: Occupational risks (en_US)
dc.subject: Optimization (en_US)
dc.subject: Program compilers (en_US)
dc.subject: Set theory (en_US)
dc.title: Using data compression for increasing memory system utilization (en_US)
dc.type: Article (en_US)
Files
Original bundle
Name: Using data compression for increasing memory system utilization.pdf
Size: 1.08 MB
Format: Adobe Portable Document Format
Description: Full printable version