Browsing by Subject "Data compression"
Now showing 1 - 10 of 10
Item Open Access
Access pattern-based code compression for memory-constrained systems (Association for Computing Machinery, 2008-09) Ozturk, O.; Kandemir, M.; Chen, G.
Compared with the large spectrum of performance optimizations, relatively little effort has been dedicated to optimizing other aspects of embedded applications, such as memory space requirements, power, real-time predictability, and reliability. In particular, many modern embedded systems operate under tight memory space constraints. One way of addressing this constraint is to compress executable code and data as much as possible. While researchers have studied efficient hardware- and software-based code compression strategies, many of these techniques do not take application behavior into account; that is, the same compression/decompression strategy is used irrespective of the application being optimized. This article presents an application-sensitive code compression strategy based on the control flow graph (CFG) representation of the embedded program. The idea is to start with a memory image wherein all basic blocks of the application are compressed, and to decompress only the blocks that are predicted to be needed in the near future. When the current access to a basic block is over, our approach also decides the point at which the block could be compressed. We propose and evaluate several compression and decompression strategies that try to reduce memory requirements without excessively increasing the original instruction cycle counts. Some of our strategies make use of profile data, whereas others are fully automatic. Our experimental evaluation using seven applications from the MediaBench suite and three large embedded applications reveals that the proposed code compression strategy is very successful in practice. Our results also indicate that working at a basic block granularity, as opposed to a procedure granularity, is important for maximizing memory space savings. © 2008 ACM.

Item Open Access
Bağlanırlıkla yönlendirilmiş uyarlamalı dalgacık dönüşümü ile üç boyutlu model sıkıştırılması [Connectivity-guided adaptive wavelet transform based three-dimensional model compression] (IEEE, 2007-06) Köse, Kıvanç; Çetin, A. Enis; Güdükbay, Uğur; Onural, Levent
Two compression frameworks based on the Set Partitioning In Hierarchical Trees (SPIHT) and JPEG2000 methods are proposed. The 3D mesh is first transformed to 2D images on a regular grid structure. Then, this image-like representation is wavelet transformed, employing an adaptive predictor that takes advantage of the connectivity information of mesh vertices, and SPIHT or JPEG2000 is applied on the wavelet domain data. The SPIHT-based method is progressive because the resolution of the reconstructed mesh can be changed by varying the length of the one-dimensional data stream created by the SPIHT algorithm. The results of the SPIHT-based algorithm are observed to be superior to those of the JPEG2000-based mesh coder and MPEG-3DGC in rate-distortion.
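
To make the on-demand decompression idea in the access pattern-based code compression entry above more concrete, here is a minimal Python sketch: every basic block starts out compressed, a block is decompressed when control flow reaches it, and a single "most likely successor" table stands in for the paper's profile-based prediction. The class, its names, zlib as the block compressor, and the one-successor predictor are all illustrative assumptions, not the authors' actual mechanism.

```python
import zlib

class BlockManager:
    """Toy on-demand decompressor for compressed basic blocks.

    The memory image keeps every basic block compressed; a block is
    decompressed only when (predicted) control flow reaches it, and its
    plain-form copy is released once the current visit ends.
    """

    def __init__(self, blocks, successors):
        # blocks: {block_id: machine-code bytes}; successors: {block_id: likely next block}
        self.compressed = {b: zlib.compress(code) for b, code in blocks.items()}
        self.decompressed = {}           # blocks currently held in plain form
        self.successors = successors

    def fetch(self, block_id):
        """Return executable bytes for block_id, decompressing if needed."""
        if block_id not in self.decompressed:
            self.decompressed[block_id] = zlib.decompress(self.compressed[block_id])
        # Speculatively decompress the predicted successor (stand-in for profiling).
        nxt = self.successors.get(block_id)
        if nxt is not None and nxt not in self.decompressed:
            self.decompressed[nxt] = zlib.decompress(self.compressed[nxt])
        return self.decompressed[block_id]

    def release(self, block_id):
        """Called when the current visit to a block ends; reclaim its space."""
        self.decompressed.pop(block_id, None)

# Example: two dummy blocks where block 0 usually falls through to block 1.
mgr = BlockManager({0: b"\x90" * 64, 1: b"\xc3" * 64}, successors={0: 1})
code = mgr.fetch(0)    # decompresses block 0 and, speculatively, block 1
mgr.release(0)         # block 0 can be recompressed or discarded after use
```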

Item Open Access
Discriminative fine-grained mixing for adaptive compression of data streams (Institute of Electrical and Electronics Engineers, 2014) Gedik, B.
This paper introduces an adaptive compression algorithm for the transfer of data streams across operators in stream processing systems. The algorithm is adaptive in the sense that it can adjust the amount of compression applied based on bandwidth, CPU, and workload availability. It is discriminative in the sense that it can judiciously apply partial compression by selecting a subset of attributes that can provide a good reduction in the used bandwidth at a low cost. The algorithm relies on the significant differences that exist among stream attributes with respect to their relative sizes, compression ratios, compression costs, and their amenability to the application of custom compressors. As part of this study, we present a model of uniform and discriminative mixing, and provide various greedy algorithms and associated metrics to locate an effective setting when model parameters are available at run-time. Furthermore, we provide online and adaptive algorithms for real-world systems in which the system parameters that can be measured at run-time are limited. We present a detailed experimental study that illustrates the superiority of discriminative mixing over uniform mixing. © 2013 IEEE.

Item Open Access
Improving multicore system performance through data compression (Wiley, 2017) Öztürk, Özcan; Kandemir, M.; Pllana, S.; Xhafa, F.
As applications become more and more complex, it is becoming extremely important to have sufficient compute power on the chip. Multicore and many-core systems have been introduced to address this problem. This chapter considers a multicore architecture that is a shared-memory multiprocessor-based system, where a certain number of processors share the same memory address space. It uses a loop nest-based code parallelization strategy for executing array-based applications on this multicore architecture. The chapter focuses on array-based codes mainly because they appear very frequently in the scientific computing domain and the embedded image/video processing domain. It explores two different strategies for dividing the available processors between compression/decompression and application execution. In the static strategy, a fixed number of processors is allocated for performing compression/decompression activity, and this allocation is not changed during the course of execution. The main idea behind the dynamic strategy is to eliminate the optimal processor selection problem of the static approach.

Item Open Access
Nonrectangular wavelets for multiresolution mesh analysis and compression (IEEE, 2006) Köse, Kıvanç; Çetin, A. Enis; Güdükbay, Uğur; Onural, Levent
We propose a new Set Partitioning In Hierarchical Trees (SPIHT) based mesh compression framework. The 3D mesh is first transformed to 2D images on a regular grid structure. Then, this image-like representation is wavelet transformed and SPIHT is applied on the wavelet domain data. The method is progressive because the resolution of the reconstructed mesh can be changed by varying the length of the one-dimensional data stream created by the SPIHT algorithm. Nearly perfect reconstruction is possible if all of the data stream is received. © 2006 IEEE.
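
As a rough illustration of the discriminative mixing idea in the data-stream entry above, the sketch below greedily selects which tuple attributes to compress, ranking them by bytes saved per unit of CPU cost until a CPU budget is exhausted. The per-attribute measurements, the ranking metric, and the budget model are simplified assumptions for illustration, not the paper's actual algorithms or metrics.

```python
def choose_attributes(attrs, spare_cpu_per_tuple):
    """Greedy pick of stream attributes to compress (illustrative only).

    attrs: list of dicts with assumed per-attribute measurements:
        {"name": str, "size": bytes, "ratio": compressed/original size,
         "cost": CPU seconds per tuple to compress}
    spare_cpu_per_tuple: CPU budget left over after normal tuple processing.
    """
    # Rank attributes by bytes saved per unit of CPU cost.
    ranked = sorted(attrs,
                    key=lambda a: (a["size"] * (1 - a["ratio"])) / a["cost"],
                    reverse=True)
    chosen, cpu_used = [], 0.0
    for a in ranked:
        if cpu_used + a["cost"] <= spare_cpu_per_tuple:
            chosen.append(a["name"])
            cpu_used += a["cost"]
    return chosen

# Example: compress only the attributes that pay off within the CPU budget.
attrs = [
    {"name": "payload", "size": 512, "ratio": 0.30, "cost": 4e-6},
    {"name": "url",     "size": 80,  "ratio": 0.55, "cost": 1e-6},
    {"name": "ts",      "size": 8,   "ratio": 0.95, "cost": 5e-7},
]
print(choose_attributes(attrs, spare_cpu_per_tuple=5e-6))  # ['payload', 'url']
```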

Item Open Access
Novel compression algorithm based on sparse sampling of 3-D laser range scans (Oxford University Press, 2013) Dobrucali, O.; Barshan, B.
Three-dimensional models of environments can be very useful and are commonly employed in areas such as robotics, art and architecture, facility management, water management, environmental/industrial/urban planning, and documentation. A 3-D model is typically composed of a large number of measurements. When 3-D models of environments need to be transmitted or stored, they should be compressed efficiently to use the capacity of the communication channel or the storage medium effectively. We propose a novel compression technique based on compressive sampling applied to sparse representations of 3-D laser range measurements. The main issue here is finding highly sparse representations of the range measurements, since they do not have such representations in common domains, such as the frequency domain. To solve this problem, we develop a new algorithm to generate sparse innovations between consecutive range measurements acquired while the sensor moves. We compare the sparsity of our innovations with others generated by estimation and filtering. Furthermore, we compare the compression performance of our lossy compression method with widely used lossless and lossy compression techniques. The proposed method offers a small compression ratio and provides a reasonable compromise between the reconstruction error and processing time. © The Author 2012. Published by Oxford University Press on behalf of The British Computer Society. All rights reserved.

Item Open Access
A privacy-preserving solution for compressed storage and selective retrieval of genomic data (Cold Spring Harbor Laboratory Press, 2016) Huang, Z.; Ayday, E.; Lin, H.; Aiyar, R. S.; Molyneaux, A.; Xu, Z.; Fellay, J.; Steinmetz, L. M.; Hubaux, Jean-Pierre
In clinical genomics, the continuous evolution of bioinformatic algorithms and sequencing platforms makes it beneficial to store patients' complete aligned genomic data in addition to variant calls relative to a reference sequence. Due to the large size of human genome sequence data files (varying from 30 GB to 200 GB depending on coverage), two major challenges facing genomics laboratories are the costs of storage and the efficiency of the initial data processing. In addition, privacy of genomic data is becoming an increasingly serious concern, yet no standard data storage solutions exist that enable compression, encryption, and selective retrieval. Here we present a privacy-preserving solution named SECRAM (Selective retrieval on Encrypted and Compressed Reference-oriented Alignment Map) for the secure storage of compressed aligned genomic data. Our solution enables selective retrieval of encrypted data and improves the efficiency of downstream analysis (e.g., variant calling). Compared with BAM, the de facto standard for storing aligned genomic data, SECRAM uses 18% less storage. Compared with CRAM, one of the most compressed nonencrypted formats (using 34% less storage than BAM), SECRAM maintains efficient compression and downstream data processing, while allowing for unprecedented levels of security in genomic data storage. Compared with previous work, the distinguishing features of SECRAM are that (1) it is position-based instead of read-based, and (2) it allows random querying of a subregion from a BAM-like file in an encrypted form. Our method thus offers a space-saving, privacy-preserving, and effective solution for the storage of clinical genomic data.
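
The sketch below illustrates only the "sparse innovation" step of the laser-scan entry above: consecutive scans are differenced and near-zero changes are dropped, leaving a sparse set of (index, value) pairs from which the current scan can be rebuilt. The plain difference predictor, the threshold, and the simulated scan are assumptions made for illustration; the paper's compressive-sampling measurement and reconstruction stages are omitted.

```python
import numpy as np

def sparse_innovation(prev_scan, curr_scan, threshold=0.02):
    """Keep only the range readings that changed appreciably between scans."""
    diff = curr_scan - prev_scan
    keep = np.abs(diff) > threshold               # most entries should be ~0
    idx = np.nonzero(keep)[0]
    return idx.astype(np.uint16), diff[keep]      # sparse (index, value) pairs

def reconstruct(prev_scan, idx, values):
    """Rebuild the current scan from the previous one plus the innovation."""
    scan = prev_scan.copy()
    scan[idx] += values
    return scan

# Example with a simulated 720-beam scan that changes in a small region.
rng = np.random.default_rng(0)
prev = 4.0 + 0.01 * rng.standard_normal(720)
curr = prev.copy()
curr[100:140] -= 1.5                              # an object moved closer
idx, vals = sparse_innovation(prev, curr)
print(len(idx), "of", len(curr), "readings stored")
assert np.allclose(reconstruct(prev, idx, vals), curr, atol=0.02)
```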

Item Open Access
Teager energy based feature parameters for robust speech recognition in car noise (IEEE, Piscataway, NJ, United States, 1999) Jabloun, F.; Çetin, A. Enis
In this paper, a new set of speech feature parameters based on multirate signal processing and the Teager Energy Operator is developed. The speech signal is first divided into nonuniform subbands in mel-scale using a multirate filter bank, then the Teager energies of the subsignals are estimated. Finally, the feature vector is constructed by log-compression and inverse DCT computation. The new feature parameters exhibit robust speech-recognition performance in car engine noise, which is low-pass in nature.

Item Open Access
Using data compression for increasing memory system utilization (Institute of Electrical and Electronics Engineers, 2009-06) Ozturk, O.; Kandemir, M.; Irwin, M. J.
The memory system presents one of the critical challenges in embedded system design and optimization. This is mainly due to the ever-increasing code complexity of embedded applications and the exponential increase seen in the amount of data they manipulate. The memory bottleneck is even more important for multiprocessor-system-on-a-chip (MPSoC) architectures due to the high cost of off-chip memory accesses in terms of both energy and performance. As a result, reducing the memory-space occupancy of embedded applications is very important and will be even more important in the next decade. While it is true that the on-chip memory capacity of embedded systems is continuously increasing, the increases in the complexity of embedded applications and the sizes of the data sets they process are far greater. Motivated by this observation, this paper presents and evaluates a compiler-driven approach to data compression for reducing memory-space occupancy. Our goal is to study how automated compiler support can help in deciding the set of data elements to compress/decompress and the points during execution at which these compressions/decompressions should be performed. We first study this problem in the context of single-core systems and then extend it to MPSoCs, where we schedule compressions and decompressions intelligently such that they do not conflict with application execution as much as possible. In particular, in MPSoCs, one needs to decide which processors should participate in the compression and decompression activities at any given point during the course of execution. We propose both static and dynamic algorithms for this purpose. In the static scheme, the processors are divided into two groups: those performing compression/decompression and those executing the application, and this grouping is maintained throughout the execution of the application. In the dynamic scheme, on the other hand, the execution starts with some grouping, but this grouping can change during the course of execution, depending on dynamic variations in the data access pattern. Our experimental results show that, in a single-core system, the proposed approach reduces maximum memory occupancy by 47.9% and average memory occupancy by 48.3% when averaged over all the benchmarks.
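
A rough Python sketch of the feature pipeline described in the Teager-energy entry above: band-pass subbands, the Teager energy operator, log-compression, and an inverse DCT. The Butterworth filter bank, the mel-spaced band edges, and all parameter values are stand-in assumptions for illustration; the paper uses a multirate mel-scale filter bank rather than the filters shown here.

```python
import numpy as np
from scipy.signal import butter, lfilter
from scipy.fftpack import idct

def teager_energy(x):
    """Teager energy operator: psi[x](n) = x(n)^2 - x(n-1) * x(n+1)."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def teager_features(frame, fs=8000, n_bands=20, n_ceps=12):
    """Illustrative Teager-energy-based features for one speech frame."""
    mel = lambda f: 2595 * np.log10(1 + f / 700.0)
    inv_mel = lambda m: 700 * (10 ** (m / 2595.0) - 1)
    # Mel-spaced band edges between 100 Hz and just below fs/2 (assumed values).
    edges = inv_mel(np.linspace(mel(100), mel(fs / 2 * 0.95), n_bands + 1))

    energies = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        sub = lfilter(b, a, frame)
        energies.append(np.mean(np.abs(teager_energy(sub))) + 1e-12)

    # Log-compression followed by an inverse DCT, as in the abstract.
    return idct(np.log(energies), norm="ortho")[:n_ceps]

# Example: features of a 30 ms synthetic vowel-like frame at 8 kHz.
fs, t = 8000, np.arange(240) / 8000
frame = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 900 * t)
print(teager_features(frame, fs).round(2))
```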
Our results also indicate that, in an MPSoC, the average energy saving is 12.7% when all eight benchmarks are considered. While compressions, decompressions, and related bookkeeping activities take extra cycles and memory space and consume additional energy, we found that the improvements they bring from the memory space, execution cycle, and energy perspectives are much higher than these overheads. © 2009 IEEE.

Item Open Access
Word-based compression in full-text retrieval systems (1995) Selçuk, Ali Aydın
The large space requirement of a full-text retrieval system can be reduced significantly by data compression. In this study, the problem of compressing the main text of a full-text retrieval system is addressed, and the performance of several coding techniques for compressing the text database is compared. Experiments show that statistical techniques, such as arithmetic coding and Huffman coding, give the best compression among the techniques implemented, and that, using a semi-static word-based model, the space needed to store English text is less than one third of the original requirement.
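
To illustrate the semi-static word-based model mentioned in the last entry, the sketch below makes one pass over the text to count word frequencies, builds a Huffman code over that vocabulary, and then codes the token stream in a second pass. Huffman coding is shown here instead of arithmetic coding purely because it is shorter to demonstrate, and the whitespace tokenizer is a simplification; neither detail is taken from the thesis itself.

```python
import heapq, itertools
from collections import Counter

def build_huffman_codes(freqs):
    """Standard Huffman code construction from a symbol-frequency mapping."""
    counter = itertools.count()                       # tie-breaker for heapq
    heap = [[f, next(counter), [sym, ""]] for sym, f in freqs.items()]
    heapq.heapify(heap)
    if len(heap) == 1:                                # degenerate single-symbol case
        return {heap[0][2][0]: "0"}
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0], next(counter), *lo[2:], *hi[2:]])
    return {sym: code for sym, code in heap[0][2:]}

def word_based_compress(text):
    """Semi-static word-based coding sketch: pass 1 builds the word model,
    pass 2 replaces each token by its Huffman code."""
    tokens = text.split(" ")                          # crude tokenizer, illustration only
    codes = build_huffman_codes(Counter(tokens))      # pass 1: the model
    bitstream = "".join(codes[t] for t in tokens)     # pass 2: the coding
    return bitstream, codes

text = "to be or not to be that is the question"
bits, codes = word_based_compress(text)
print(len(bits), "bits vs", 8 * len(text), "bits uncompressed")
```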