Browsing by Subject "Compilers"
Now showing 1 - 7 of 7
Item Open Access
Automatic code optimization using graph neural networks (2023-01)
Peker, Melih
Compilers provide hundreds of optimization options, and choosing a good optimization sequence is a complex and time-consuming task that requires extensive effort and expert input. Consequently, much research has focused on finding good optimizations automatically. While most of this work relies on static, spatial, or dynamic features, some recent research applies deep neural networks directly to source code. We combined static features, spatial features, and deep neural networks by representing source code as graphs, and trained a Graph Neural Network (GNN) to find suitable optimization flags automatically. We chose eight binary optimization flags and two benchmark suites, Polybench and cBench, and created a dataset of 12,000 graphs using 256 optimization flag combinations on 47 benchmarks. We trained and tested our model on these benchmarks, and our results show a maximum speed-up of 48.6% compared to the case where all optimization flags are enabled.

Item Open Access
Automatic selection of compiler optimizations by machine learning (IEEE - Institute of Electrical and Electronics Engineers, 2023-08-28)
Peker, Melih; Öztürk, Özcan; Yıldırım, S.; Uluyağmur Öztürk, M.
Many widely used telecommunications applications have extremely long run times, so executing these codes faster and more efficiently on the same hardware is important in critical telecommunication applications such as base stations. Compilers greatly affect the properties of the executable they produce: compilation speed, execution time, power consumption, and code size can all be changed through compiler flags. This study aims to find, among hundreds of GCC compiler flag combinations, the set of flags that yields the shortest run time, using code flow analysis, loop analysis, and machine learning, without running the program.
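Neither abstract ships code, but the search space they tackle is easy to make concrete. The following Python sketch brute-forces combinations of a few binary GCC flags and times each compiled variant; this is the exhaustive baseline that the learned models above are meant to avoid. The flag subset and benchmark file are illustrative assumptions, not the papers' actual experimental setup.

```python
import itertools
import subprocess
import time

# Hypothetical subset of binary GCC flags; the papers' exact
# eight-flag set is not listed in the abstracts.
FLAGS = ["-funroll-loops", "-ftree-vectorize",
         "-finline-functions", "-fomit-frame-pointer"]

def time_variant(source, flags):
    """Compile `source` with the given flags and time one run."""
    subprocess.run(["gcc", "-O2", *flags, source, "-o", "variant"],
                   check=True)
    start = time.perf_counter()
    subprocess.run(["./variant"], check=True)
    return time.perf_counter() - start

def best_flags(source):
    # Enumerate every on/off combination (2^len(FLAGS) variants),
    # which is exactly the cost a trained predictor avoids.
    best = None
    for mask in itertools.product([False, True], repeat=len(FLAGS)):
        chosen = [f for f, on in zip(FLAGS, mask) if on]
        runtime = time_variant(source, chosen)
        if best is None or runtime < best[0]:
            best = (runtime, chosen)
    return best

if __name__ == "__main__":
    print(best_flags("benchmark.c"))  # hypothetical benchmark file
```

With eight binary flags, as in the first paper, this loop would compile and run 256 variants per benchmark, which is why predicting good flags from code features without running the program is attractive.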
Item Open Access
Data locality and parallelism optimization using a constraint-based approach (Academic Press, 2011)
Ozturk, O.
Embedded applications are becoming increasingly complex and process ever-larger datasets. In the context of data-intensive embedded applications, there have been two complementary approaches to enhancing application behavior: data locality optimization and improving loop-level parallelism. Data locality needs to be enhanced to maximize the number of data accesses satisfied from the higher levels of the memory hierarchy. On the other hand, compiler-based code parallelization schemes require a fresh look for chip multiprocessors, since interprocessor communication is much cheaper than off-chip memory access; a compiler therefore needs to minimize the number of off-chip memory accesses, which can be achieved by considering multiple loop nests simultaneously. Although compilers address both of these problems, there is an inherent difficulty in optimizing data locality and parallelism simultaneously, so an integrated approach that combines the two can generate much better results than either individual approach. Based on these observations, this paper proposes a constraint network (CN)-based formulation for data locality optimization and code parallelization. The paper also presents experimental evidence demonstrating the success of the proposed approach and compares our results with those obtained through previously proposed approaches. Experiments with our implementation indicate that the proposed approach is very effective in enhancing data locality and parallelization. © 2010 Elsevier Inc. All rights reserved.

Item Open Access
ILP-based energy minimization techniques for banked memories (Association for Computing Machinery, 2008-07)
Ozturk, O.; Kandemir, M.
Main memories can consume a significant portion of overall energy in many data-intensive embedded applications. One way of reducing this energy consumption is banking: dividing the available memory space into multiple banks and placing unused (idle) banks into low-power operating modes. Prior work investigated approaches based on code restructuring and data-layout reorganization for increasing the energy benefits obtainable from a banked memory architecture. This article explores different techniques that can coexist within the same optimization framework to maximize the benefits of low-power operating modes: nonuniform bank sizes, data migration, data compression, and data replication. These techniques increase the chances of utilizing low-power operating modes effectively, achieving further energy savings beyond what low-power modes alone provide. Specifically, nonuniform banking tries to match bank sizes with application data access patterns; data migration clusters data with similar access patterns in the same set of banks; data compression reduces the size of the data used by an application, and thus the number of memory banks it occupies; and data replication increases bank idleness by duplicating select read-only data blocks across banks. We formulate each of these techniques as an ILP (integer linear programming) problem and solve it using a commercial solver. Our experimental analysis using several benchmarks indicates that all the techniques presented in this framework successfully reduce memory energy consumption. Based on our experience with these techniques, we recommend that compiler writers targeting banked memories consider data compression, replication, and migration. © 2008 ACM.
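The abstract does not give the ILP itself, so the sketch below is only a toy in the same spirit, written with the open-source PuLP library rather than the commercial solver the paper uses: pack data blocks into banks so that as few banks as possible must stay active, leaving the rest free to enter low-power modes. Block sizes, bank count, and capacity are invented.

```python
# Toy ILP: place data blocks into memory banks while minimizing the
# number of active banks (an idle bank can enter a low-power mode).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

block_size = {"b0": 3, "b1": 2, "b2": 4, "b3": 1}   # hypothetical KB sizes
banks, capacity = ["m0", "m1", "m2"], 6             # hypothetical banks

prob = LpProblem("bank_packing", LpMinimize)
# x[b][m] = 1 if block b lives in bank m; active[m] = 1 if bank m is used.
x = {b: {m: LpVariable(f"x_{b}_{m}", cat=LpBinary) for m in banks}
     for b in block_size}
active = {m: LpVariable(f"active_{m}", cat=LpBinary) for m in banks}

prob += lpSum(active[m] for m in banks)             # minimize active banks

for b in block_size:                                # each block in one bank
    prob += lpSum(x[b][m] for m in banks) == 1
for m in banks:                                     # respect bank capacity
    prob += (lpSum(block_size[b] * x[b][m] for b in block_size)
             <= capacity * active[m])

prob.solve()
for m in banks:
    placed = [b for b in block_size if x[b][m].value() == 1]
    print(m, "active" if active[m].value() else "idle", placed)
```

The paper's actual formulations additionally model migration, compression, and replication decisions over time; this sketch captures only the core idea that placement directly controls how many banks can idle.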
Item Open Access
Improving chip multiprocessor reliability through code replication (Pergamon Press, 2010)
Ozturk, O.
Chip multiprocessors (CMPs) are promising candidates for next-generation computing platforms, utilizing large numbers of gates while reducing the effects of high interconnect delays. One of the key challenges in CMP design is balancing often-conflicting demands: for today's image/video applications and systems, power consumption, memory space occupancy, area cost, and reliability are as important as performance, so a compilation framework for CMPs should consider multiple factors during optimization. Motivated by this observation, this paper addresses energy-aware reliability support for CMP architectures, targeting in particular array-intensive image/video applications. Our compiler approach has two main goals. First, we want to minimize the energy wasted in executing replicas when there is no error during execution (which should be the most frequent case in practice). Second, we want to minimize the time to recover (through the replicas) from an error when one occurs. This approach has been implemented and tested using four parallel array-based applications from the image/video processing domain. Our experimental evaluation indicates that the proposed approach saves significant energy compared with running all replicas at the highest voltage/frequency level, without sacrificing any reliability relative to that case. © 2009 Elsevier Ltd. All rights reserved.

Item Open Access
Optimizing shared cache behavior of chip multiprocessors (ACM, 2009-12)
Kandemir, M.; Muralidhara, S. P.; Narayanan, S. H. K.; Zhang, Y.; Öztürk, Özcan
One of the critical problems in emerging chip multiprocessors (CMPs) is managing the on-chip shared cache space. Unfortunately, single-processor-centric data locality optimization schemes may not work well in the CMP case, as data accesses from multiple cores can create conflicts in the shared cache. The main contribution of this paper is a compiler-directed code restructuring scheme for enhancing the locality of shared data in CMPs. The proposed scheme targets the last-level shared cache that exists in many commercial CMPs and has two components: allocation, which determines the set of loop iterations assigned to each core, and scheduling, which determines the order in which the iterations assigned to a core are executed. Our scheme restructures the application code so that the different cores operate on shared data blocks at the same time, to the extent allowed by data dependencies. This helps reduce reuse distances for the shared data and improves on-chip cache performance. We evaluated our approach using the Splash-2 and Parsec applications, through both simulation and experiments on two commercial multi-core machines. Our experimental evaluation indicates that the proposed data locality optimization scheme reduces inter-core conflict misses in the shared cache by 67% on average when both allocation and scheduling are used. Moreover, the execution time improvements we achieve (29% on average) are very close to the optimal savings achievable with a hypothetical scheme. Copyright 2009 ACM.
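To illustrate the allocation/scheduling split described above, here is a deliberately simplified Python sketch: iteration blocks that touch the same data tile are co-scheduled in the same time slot on different cores, which is the mechanism the paper uses to shorten reuse distances for shared data. The tile mapping and core count are invented, and the paper's real scheme is driven by data-dependence analysis rather than a lookup table.

```python
from collections import defaultdict

# Iteration block -> data tile it mostly touches (invented example).
touches = {"I0": "T0", "I1": "T0", "I2": "T1", "I3": "T1",
           "I4": "T2", "I5": "T2", "I6": "T0", "I7": "T1"}
num_cores = 2

# Allocation: group iteration blocks by the tile they share.
by_tile = defaultdict(list)
for block, tile in touches.items():
    by_tile[tile].append(block)

# Scheduling: sharers of a tile run in the same time slot on different
# cores, so the tile is reused while it is still resident in the
# shared cache.
schedule = defaultdict(list)  # core -> list of (slot, block)
slot = 0
for tile in sorted(by_tile):
    blocks = by_tile[tile]
    for start in range(0, len(blocks), num_cores):
        for core, block in enumerate(blocks[start:start + num_cores]):
            schedule[core].append((slot, block))
        slot += 1

for core in range(num_cores):
    print(f"core {core}:", schedule[core])
```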
Item Open Access
Using data compression for increasing memory system utilization (Institute of Electrical and Electronics Engineers, 2009-06)
Ozturk, O.; Kandemir, M.; Irwin, M. J.
The memory system presents one of the critical challenges in embedded system design and optimization, mainly because of the ever-increasing code complexity of embedded applications and the exponential growth in the amount of data they manipulate. The memory bottleneck is even more important for multiprocessor-system-on-a-chip (MPSoC) architectures due to the high cost of off-chip memory accesses in terms of both energy and performance. As a result, reducing the memory-space occupancy of embedded applications is very important and will become even more so in the next decade: while the on-chip memory capacity of embedded systems is continuously increasing, the growth in application complexity and dataset size is far greater. Motivated by this observation, this paper presents and evaluates a compiler-driven approach to data compression for reducing memory-space occupancy. Our goal is to study how automated compiler support can help decide which data elements to compress/decompress and at which points during execution these compressions/decompressions should be performed. We first study this problem in the context of single-core systems and then extend it to MPSoCs, where we schedule compressions and decompressions so that they interfere with application execution as little as possible. In particular, in MPSoCs one needs to decide which processors should participate in compression and decompression activities at any given point during execution. We propose both static and dynamic algorithms for this purpose. In the static scheme, the processors are divided into two groups, those performing compression/decompression and those executing the application, and this grouping is maintained throughout the execution of the application. In the dynamic scheme, on the other hand, the execution starts with some grouping, but this grouping can change during the course of execution, depending on dynamic variations in the data access pattern. Our experimental results show that, in a single-core system, the proposed approach reduces maximum memory occupancy by 47.9% and average memory occupancy by 48.3% when averaged over all the benchmarks. Our results also indicate that, in an MPSoC, the average energy saving is 12.7% across all eight benchmarks. While compressions, decompressions, and related bookkeeping take extra cycles and memory space and consume additional energy, we found that the memory space, execution cycle, and energy improvements they bring far outweigh these overheads. © 2009 IEEE.
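As a rough illustration of the compress-on-idle idea, the Python sketch below keeps arrays that a (hypothetical) compiler analysis has marked cold in zlib-compressed form and decompresses them lazily on first access, tracking memory occupancy along the way. The class, array names, and the cold/hot decisions are invented; in the paper these decisions are made by automated compiler support, not by hand at run time.

```python
import zlib
import numpy as np

class CompressedStore:
    """Toy model of keeping idle data compressed to cut occupancy."""

    def __init__(self):
        self.hot = {}    # name -> live ndarray
        self.cold = {}   # name -> (compressed bytes, shape, dtype)

    def demote(self, name):
        """Compress an array the analysis says will be idle a while."""
        arr = self.hot.pop(name)
        self.cold[name] = (zlib.compress(arr.tobytes()),
                           arr.shape, arr.dtype)

    def access(self, name):
        """Decompress lazily at the first use point."""
        if name in self.cold:
            raw, shape, dtype = self.cold.pop(name)
            self.hot[name] = np.frombuffer(
                zlib.decompress(raw), dtype=dtype).reshape(shape).copy()
        return self.hot[name]

    def occupancy(self):
        """Total bytes held, live plus compressed."""
        live = sum(a.nbytes for a in self.hot.values())
        packed = sum(len(raw) for raw, _, _ in self.cold.values())
        return live + packed

store = CompressedStore()
store.hot["image"] = np.zeros((512, 512), dtype=np.uint8)  # compresses well
print("before:", store.occupancy())
store.demote("image")
print("after demote:", store.occupancy())
store.access("image")   # transparently decompressed on use
print("after access:", store.occupancy())
```

The MPSoC schemes in the paper add a second dimension this sketch omits: choosing which processors run the compression/decompression work, either with a fixed static grouping or a grouping that adapts to the data access pattern.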