Browsing by Subject "Runtimes"
Now showing 1 - 8 of 8
Item Open Access
An approach for detecting inconsistencies between behavioral models of the software architecture and the code (2012-07)
Çıracı, Selim; Sözer, Hasan; Tekinerdoğan, Bedir
In practice, inconsistencies between architectural documentation and the code might arise due to improper implementation of the architecture or the separate, uncontrolled evolution of the code. Several approaches have been proposed to detect inconsistencies between the architecture and the code, but these tend to be limited in capturing inconsistencies that might occur at runtime. We present a runtime verification approach for detecting inconsistencies between the dynamic behavior of the documented architecture and the actual runtime behavior of the system. The approach is supported by a set of tools that implement the architecture and the code patterns in Prolog, and automatically generate runtime monitors for detecting inconsistencies. We illustrate the approach and the toolset for a Crisis Management System case study. © 2012 IEEE.

Item Open Access
Building user-defined runtime adaptation routines for stream processing applications (VLDB Endowment, 2012)
Jacques-Silva, G.; Gedik, B.; Wagle, R.; Wu, Kun-Lung; Kumar, V.
Stream processing applications are deployed as continuous queries that run from the time of their submission until their cancellation. This deployment mode limits developers who need their applications to perform runtime adaptation, such as algorithmic adjustments, incremental job deployment, and application-specific failure recovery. Currently, developers do runtime adaptation by using external scripts and/or by inserting operators into the stream processing graph that are unrelated to the data processing logic. In this paper, we describe a component called orchestrator that allows users to write routines for automatically adapting the application to runtime conditions. Developers build an orchestrator by registering and handling events as well as specifying actuations. Events can be generated due to changes in the system state (e.g., application component failures), built-in system metrics (e.g., throughput of a connection), or custom application metrics (e.g., quality score). Once the orchestrator receives an event, users can take adaptation actions by using the orchestrator actuation APIs. We demonstrate the use of the orchestrator in IBM's System S in the context of three different applications, illustrating application adaptation to changes in the incoming data distribution, to application failures, and on-demand dynamic composition. © 2012 VLDB Endowment.

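The register-handle-actuate pattern described in the orchestrator abstract above can be illustrated with a small sketch. The class and method names below are hypothetical stand-ins, not the actual System S orchestrator API, and the events and actuations are invented examples.

```python
# Illustrative sketch only: hypothetical names, not the System S orchestrator API.

class Orchestrator:
    """Toy orchestrator: register handlers for runtime events and react via actuations."""

    def __init__(self):
        self._handlers = {}  # event name -> list of handler callbacks

    def on(self, event_name, handler):
        """Register an adaptation routine to run when `event_name` is raised."""
        self._handlers.setdefault(event_name, []).append(handler)

    def raise_event(self, event_name, **payload):
        """Deliver an event (e.g., an operator failure or a metric threshold crossing)."""
        for handler in self._handlers.get(event_name, []):
            handler(self, **payload)

    # Actuation stubs: in a real system these would call into the streaming runtime.
    def restart_operator(self, name):
        print(f"actuation: restarting operator {name}")

    def adjust_parameter(self, operator, key, value):
        print(f"actuation: setting {key}={value} on {operator}")


orch = Orchestrator()
# Adaptation routine: if a connection's throughput drops, lower an algorithmic quality knob.
orch.on("low_throughput",
        lambda o, operator, rate: o.adjust_parameter(operator, "quality", "fast"))
# Adaptation routine: restart an operator on failure (application-specific recovery).
orch.on("operator_failed",
        lambda o, operator: o.restart_operator(operator))

orch.raise_event("low_throughput", operator="scorer", rate=120)
orch.raise_event("operator_failed", operator="parser")
```
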
Item Open Access
CAPSULE: Language and system support for efficient state sharing in distributed stream processing systems (ACM, 2012)
Losa, G.; Kumar, V.; Andrade, H.; Gedik, Buğra; Hirzel, M.; Soulé, R.; Wu, K.-L.
Data stream processing applications are often expressed as data flow graphs, composed of operators connected via streams. This structured representation provides a simple yet powerful paradigm for building large-scale, distributed, high-performance applications. However, there are many tasks that require sharing data across operators, and across operators and the runtime, using a less structured mechanism than point-to-point data flows. Examples include updating control variables, sending notifications, collecting metrics, and building collective models. In this paper we describe CAPSULE, which fills this gap. CAPSULE is a code generation and runtime framework that offers developers an easy-to-use and highly flexible way to realize shared variables (the CAPSULE term for shared state) by specifying a data structure (at the programming-language level) and a few associated configuration parameters that qualify the expected usage scenario. Besides ease of use and flexibility, CAPSULE offers the following important benefits: (1) Custom code generation: CAPSULE uses the user-specified configuration parameters and information from the runtime to generate shared variable servers tailored to the specific usage scenario; (2) Composability: CAPSULE supports deployment-time composition of the shared variable servers to achieve the desired levels of scalability, performance, and fault tolerance; and (3) Extensibility: CAPSULE provides simple interfaces for extending the framework with more protocols, transports, caching mechanisms, etc. We describe the motivation for CAPSULE and its design, report on its implementation status, and then present experimental results. Copyright © 2012 ACM.

Item Open Access
Dynamic thread and data mapping for NoC based CMPs (IEEE, 2009-07)
Kandemir, M.; Öztürk, Özcan; Muralidhara, S. P.
Thread mapping and data mapping are two important problems in the context of NoC (network-on-chip) based CMPs (chip multiprocessors). While a compiler can determine suitable mappings for data and threads, such static mappings may not work well for multithreaded applications that go through different execution phases, each phase with potentially different data access patterns than the others. Instead, a dynamic mapping strategy, if its overheads can be kept low, may be a more promising option. In this work, we present dynamic (runtime) thread and data mappings for NoC based CMPs. The goal of these mappings is to reduce the distance between the location of the core that requests data and the core whose local memory contains that requested data. In our experiments, we evaluate our proposed thread mapping and data mapping in isolation as well as in an integrated manner. Copyright 2009 ACM.

Item Open Access
Interaction-based feature-driven model-transformations for generating E-forms (ACM, 2009-10)
Tekinerdoğan, Bedir; Aktekin, N.
One of the basic pillars of Model-Driven Software Development (MDSD) is model transformation, and several useful approaches have been proposed in this context. In parallel, domain modeling plays an essential role in MDSD in supporting the definition of domain concepts and the model transformation process. In this paper, we discuss the results of an e-government project on the generation of e-forms from feature models. Existing model transformation practices largely adopt a closed-world assumption, whereby transformation definitions are fixed beforehand and interaction with the user at run time is largely omitted. Our study shows the need for a more interactive approach to model transformation, in which e-forms are generated after interaction with the end user. To illustrate the case, we present three approaches of increasing complexity: (1) offline model transformation without interaction, (2) model transformation with initial interaction, and (3) model transformation with run-time interaction. Copyright © 2009 ACM.

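The third variant in the e-forms abstract above, model transformation with run-time interaction, can be sketched as follows. The feature model, field names, and prompts are made-up examples, not the project's actual metamodel or tooling.

```python
# Minimal sketch (assumed data model, not the project's actual metamodel): generating
# an e-form from a feature model while asking the end user at run time which optional
# features to include -- the "model transformation with run-time interaction" variant.

feature_model = {
    "Applicant": {"mandatory": ["name", "national_id"], "optional": ["email", "phone"]},
    "Address":   {"mandatory": ["city"],                "optional": ["postal_code"]},
}

def generate_form(model, ask=input):
    fields = []
    for feature, spec in model.items():
        fields.extend((feature, f) for f in spec["mandatory"])
        for opt in spec["optional"]:
            # Run-time interaction: the transformation consults the user instead of
            # relying on a transformation definition fixed entirely beforehand.
            if ask(f"Include optional field '{feature}.{opt}'? [y/n] ").lower() == "y":
                fields.append((feature, opt))
    return fields

if __name__ == "__main__":
    # Non-interactive demo: answer "y" to every prompt; pass ask=input for real interaction.
    for group, name in generate_form(feature_model, ask=lambda prompt: "y"):
        print(f"<input name='{group}.{name}'/>")
```
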
Item Open Access
Process variation aware thread mapping for chip multiprocessors (IEEE, 2009-04)
Hong, S.; Narayanan, S. H. K.; Kandemir, M.; Öztürk, Özcan
With the increasing scaling of manufacturing technology, process variation has become more prevalent. As a result, in the context of Chip Multiprocessors (CMPs), for example, it is possible that identically designed processor cores on the same chip have non-identical peak frequencies and power consumptions. To cope with such a design, each processor can be assumed to run at the frequency of the slowest processor, resulting in wasted computational capability. This paper considers an alternative approach and proposes an algorithm that intelligently maps (and remaps) computations onto the available processors so that each processor runs at its peak frequency. In other words, by dynamically changing the thread-to-processor mapping at runtime, our approach allows each processor to maximize its performance, rather than simply using the chip-wide lowest frequency among all cores and the highest cache latency. Experimental evidence shows that, compared to a process variation agnostic thread mapping strategy, our proposed scheme achieves as much as a 29% improvement in overall execution latency, with an average improvement of 13% over the benchmarks tested. We also demonstrate that our savings are consistent across different processor counts, latency maps, and latency distributions. © 2009 EDAA.

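A minimal sketch of the idea behind the abstract above: assign the most demanding threads to the cores with the highest peak frequencies, so that no core has to be held to the chip-wide slowest frequency. This is an illustrative greedy heuristic, not the paper's algorithm, and the frequencies and demand figures are invented.

```python
# Illustrative sketch of the idea (not the paper's algorithm): map the most
# compute-hungry threads to the cores with the highest peak frequencies, letting
# each core run at its own peak rather than the chip-wide slowest frequency.
# The per-core frequencies and per-thread demands below are made-up example numbers.

core_peak_ghz = {0: 3.2, 1: 2.6, 2: 3.0, 3: 2.1}             # process variation across cores
thread_demand = {"t0": 9.0, "t1": 4.0, "t2": 7.5, "t3": 1.0}  # relative work per thread

def variation_aware_mapping(cores, threads):
    """Greedily pair the heaviest threads with the fastest cores."""
    fast_cores = sorted(cores, key=cores.get, reverse=True)
    heavy_threads = sorted(threads, key=threads.get, reverse=True)
    return dict(zip(heavy_threads, fast_cores))

mapping = variation_aware_mapping(core_peak_ghz, thread_demand)
print(mapping)  # {'t0': 0, 't2': 2, 't1': 1, 't3': 3}
```
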
Item Open Access
River: an intermediate language for stream processing (John Wiley & Sons Ltd., 2016)
Soulé, R.; Hirzel, M.; Gedik, B.; Grimm, R.
This paper presents both a calculus for stream processing, named Brooklet, and its realization as an intermediate language, named River. Because River is based on Brooklet, it has a formal semantics that enables reasoning about the correctness of source translations and optimizations. River builds on Brooklet by addressing the real-world details that the calculus elides. We evaluated our system by implementing front-ends for three streaming languages and three important optimizations, as well as a back-end for the System S distributed streaming runtime. Overall, we significantly lower the barrier to entry for new stream-processing languages and thus grow the ecosystem of this crucial style of programming.

Item Open Access
Using dynamic compilation for continuing execution under reduced memory availability (IEEE, 2009-04)
Öztürk, Özcan; Kandemir, M.
This paper explores the use of dynamic compilation for continuing execution even when one or more of the memory banks used by an application become temporarily unavailable (while their contents are preserved), that is, when the number of memory banks available to the application varies at runtime. We implemented the proposed dynamic compilation approach using a code instrumentation system and performed experiments with 12 embedded benchmark codes. The results collected so far are very encouraging and indicate that, even when all the overheads incurred by dynamic compilation are included, the proposed approach still brings significant benefits over an alternative approach that suspends application execution when memory bank availability is reduced and resumes it later when all the banks are up and running. © 2009 EDAA.
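A simplified sketch of the policy contrast drawn in the abstract above: keep the application running by steering allocations to whichever memory banks remain available, instead of suspending until every bank is back. This toy allocator is illustrative only; the paper's approach works through dynamic compilation and code instrumentation, which this sketch does not model.

```python
# Simplified sketch of the policy contrast in the abstract (not the paper's
# instrumentation-based implementation): when a memory bank becomes temporarily
# unavailable, continue execution by redirecting new allocations to the banks
# that remain, rather than suspending until all banks are up again.

class BankedAllocator:
    def __init__(self, num_banks):
        self.available = set(range(num_banks))
        self.counter = 0

    def bank_down(self, bank):
        self.available.discard(bank)  # contents preserved, bank just unusable for now

    def bank_up(self, bank):
        self.available.add(bank)

    def allocate(self):
        if not self.available:
            raise RuntimeError("no banks available: execution would have to suspend")
        # Round-robin over whatever banks are currently usable.
        banks = sorted(self.available)
        bank = banks[self.counter % len(banks)]
        self.counter += 1
        return bank

alloc = BankedAllocator(num_banks=4)
alloc.bank_down(2)                            # bank 2 temporarily unavailable
print([alloc.allocate() for _ in range(6)])   # execution continues on banks 0, 1, 3
```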