Browsing by Subject "Redundancy"
Now showing 1 - 8 of 8
Item (Embargo): Architecture for safety-critical transportation systems (Elsevier B.V., 2023-03-15)
Ahangari, Hamzeh; Özkök, Yusuf İbrahim; Yıldırım, Asil; Say, Fatih; Atık, Funda; Ozturk, Ozcan
In many industrial systems, including transportation, fault tolerance is a key requirement. Usually, fault tolerance is achieved by redundancy, where replication of critical components is used. In the case of transportation computing systems, this redundancy starts with the processing element. In this paper, we use Markov models to assess the level of safety with different redundancy techniques used in the literature. More specifically, we give implementation details for various architecture options and evaluate one-out-of-two (1oo2) and two-out-of-three (2oo3) implementations. We observe that both 1oo2 and 2oo3 can reduce the average probability of failure per hour (PFH) down to 10⁻⁷, which provides Level-3 (SIL3) safety according to the standards.

Item (Open Access): Assessment of information redundancy in ECG signals (IEEE, 1997-09)
Acar, Burak; Özçakır, Lütfü; Köymen, Hayrettin
In this paper, the morphological information redundancy in standard 12-lead ECG channels is studied. The study is based on decomposing the ECG channels into orthogonal channels by an SVD-based algorithm and then reconstructing them. Then 7 of 8 independently recorded ECG channels are decomposed and the missing channel is reconstructed from these orthogonal channels. Thus, the unique morphological information content of each ECG channel is assessed through the loss of clinical information in the reconstructed signal. A comparison of the clinical parameters measured from the reconstructed and original ECG is reported.
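The first entry above ("Architecture for safety-critical transportation systems") assesses 1oo2 and 2oo3 redundancy with Markov models and reports the average probability of failure per hour (PFH). The sketch below is only a minimal illustration of how such an assessment can be set up, not the paper's model: the three-state chain, the per-channel failure rate, the beta-factor treatment of common-cause failures, and the absence of repair and diagnostics are all simplifying assumptions.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative 1oo2 Markov model (not the paper's model):
# state 0: both channels healthy, state 1: one channel failed,
# state 2: both channels failed, i.e. dangerous system failure (absorbing).
lam = 1e-6     # assumed dangerous failure rate per channel [1/h]
beta = 0.02    # assumed common-cause failure fraction (beta-factor model)
T = 1e4        # assumed evaluation interval [h]

Q = np.array([
    [-(2 * (1 - beta) * lam + beta * lam), 2 * (1 - beta) * lam, beta * lam],
    [0.0,                                  -lam,                 lam       ],
    [0.0,                                   0.0,                 0.0       ],
])

p0 = np.array([1.0, 0.0, 0.0])       # start with both channels healthy
pT = p0 @ expm(Q * T)                # state distribution after T hours
pfh = pT[2] / T                      # average probability of failure per hour
print(f"average PFH over {T:.0f} h: {pfh:.2e}")
```

A 2oo3 evaluation follows the same pattern, with extra states for the third channel and system failure once two channels are lost.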
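For the ECG entry ("Assessment of information redundancy in ECG signals"), the following sketch illustrates the general idea of reconstructing a left-out lead from an SVD-derived orthogonal basis of the remaining ones. It is not the paper's algorithm: the synthetic 8-channel data, the train/test split, and the least-squares fit of the missing lead are assumptions made purely for illustration.

```python
import numpy as np

# Synthetic stand-in for 8 simultaneously recorded, highly correlated ECG channels.
rng = np.random.default_rng(0)
N = 2000
latent = rng.standard_normal((3, N))                     # toy underlying sources
mix = rng.standard_normal((8, 3))
X = mix @ latent + 0.01 * rng.standard_normal((8, N))    # rows = leads

missing = 5                                   # lead to drop and then reconstruct
keep = [i for i in range(8) if i != missing]
train, test = slice(0, N // 2), slice(N // 2, N)

# Orthogonal channels of the 7 retained leads (SVD of the training segment).
U, s, Vt = np.linalg.svd(X[keep][:, train], full_matrices=False)

# Fit the missing lead as a combination of the orthogonal channels
# (the lead is available on the training segment only).
coeffs, *_ = np.linalg.lstsq(Vt.T, X[missing, train], rcond=None)

# On the held-out segment, use only the 7 retained leads: re-express them in the
# same orthogonal basis and apply the fitted coefficients.
to_basis = np.diag(1.0 / s) @ U.conj().T
reconstructed = coeffs @ (to_basis @ X[keep][:, test])

err = np.linalg.norm(reconstructed - X[missing, test]) / np.linalg.norm(X[missing, test])
print(f"relative reconstruction error: {err:.3f}")
```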
Item (Open Access): Custom hardware optimizations for reliable and high performance computer architectures (2020-09)
Ahangari, Hamzeh
In recent years, we have witnessed a huge wave of innovations, such as in Artificial Intelligence (AI) and the Internet-of-Things (IoT). In this trend, software tools are constantly demanding more processing power, which traditional processors can no longer deliver. In response to this need, a diverse range of hardware, including GPUs, FPGAs, and AI accelerators, is coming to the market every day. At the same time, hardware platforms are becoming more power-hungry due to higher performance demands, while the continued shrinking of transistors and the strong emphasis on reducing supply voltage have always been sources of reliability concerns in circuits. This is particularly true for error-sensitive applications, such as those in the transportation and aviation industries, where an error can be catastrophic. Reliability issues may also have other causes, such as harsh environmental conditions. These two problems of modern electronic circuits, the simultaneous need for higher performance and higher reliability, require appropriate solutions. To satisfy both constraints, designs based on reconfigurable circuits such as FPGAs, or on Commercial-Off-The-Shelf (COTS) components such as general-purpose processors, are appropriate approaches, because these platforms can be used in a wide variety of applications. In this regard, three solutions are proposed in this thesis. These solutions target 1) safety and reliability at the system level using redundant processors, 2) performance at the architecture level using multiple accelerators, and 3) reliability at the circuit level through the use of redundant transistors. Specifically, the first work discusses the contribution of several prevalent parameters to the design of safety-critical computers built from COTS processors. Redundant architectures are modeled by Markov chains, and the sensitivity of system safety to these parameters is analyzed; most importantly, the significant presence of Common Cause Failures (CCFs) is investigated. The second work presents the design and implementation of an HLS-based, FPGA-accelerated, high-throughput, work-efficient, synthesizable, template-based graph processing framework. The template framework is simplified for easy mapping to FPGAs, even by software programmers, and is evaluated on Intel's state-of-the-art Xeon+FPGA platform with iterative graph algorithms. Besides the high-throughput pipeline, a work-efficient mode significantly reduces total graph processing run-time through a novel active-list design. The third work introduces the Joint SRAM (JSRAM) cell, a novel circuit-level technique that exploits the trade-off between reliability and memory size. The idea is applicable to any SRAM structure, such as cache memory, register files, FPGA block RAM, or FPGA look-up tables (LUTs), and even to latches and flip-flops. In fault-prone conditions, the structure can be configured so that four cells are combined at the circuit level to form one large, robust memory bit. Unlike prevalent hardware redundancy techniques such as Triple Modular Redundancy (TMR), there is no explicit majority voter at the output. The proposed solution mainly targets transient faults, with the reliable mode providing auto-correction and full immunity against single faults.

Item (Open Access): Diversity and novelty in information retrieval (ACM, 2013-07-08)
Santos, R. L. T.; Castells, P.; Altıngövde, I. S.; Can, Fazlı
This tutorial aims to provide a unifying account of current research on diversity and novelty in different IR domains, namely in the context of search engines, recommender systems, and data streams.

Item (Open Access): Diversity and novelty in web search, recommender systems and data streams (Association for Computing Machinery, 2014-02)
Santos, R. L. T.; Castells, P.; Altingovde, I. S.; Can, Fazlı
This tutorial aims to provide a unifying account of current research on diversity and novelty in the domains of web search, recommender systems, and data stream processing.
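The thesis entry above ("Custom hardware optimizations for reliable and high performance computer architectures") contrasts its JSRAM cell with Triple Modular Redundancy, which requires an explicit majority voter at the output. Purely as a point of reference (the JSRAM structure itself is a circuit-level technique with no direct software analogue), a bitwise 2-of-3 majority voter can be sketched as follows:

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority vote over three redundant copies of a word.

    A single corrupted copy is outvoted; this is the explicit voter that the
    JSRAM approach described above avoids by combining cells in circuit.
    """
    return (a & b) | (a & c) | (b & c)

# One copy suffers a single-bit upset; the vote still returns the original word.
original = 0b1011_0110
corrupted = original ^ 0b0000_0100    # flip one bit
assert tmr_vote(original, corrupted, original) == original
```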
Item (Open Access): The effect of distribution of information on recovery of propagating signals (2015-09)
Karabulut, Özgecan
Interpolation is one of the fundamental concepts in signal processing. The analysis of the difficulty of interpolation of propagating waves is the subject of this thesis. It is known that the information contained in a propagating wave field can be fully described by its uniform samples taken on a planar surface transversal to the propagation direction, so the field can be found anywhere in space by using the wave propagation equations. However in some cases, the sample locations may be irregular and/or nonuniform. We are concerned with interpolation from such samples. To be able to reduce the problem to a pure mathematical form, the fractional Fourier transform is used thanks to the direct analogy between wave propagation and fractional Fourier transformation. The linear relationship between each sample and the unknown field distribution is established this way. These relationships, which constitute a signal recovery problem based on multiple partial fractional Fourier transform information, are analyzed. Recoverability of the field is examined by comparing the condition numbers of the constructed matrices corresponding to different distributions of the available samples.

Item (Open Access): Motion-compensated prediction based algorithm for medical image sequence compression (Elsevier BV, 1995-09)
Oğuz, S. H.; Gerek, Ö. N.; Çetin, A. Enis
A method for irreversible compression of medical image sequences is described. The method relies on discrete cosine transform and motion-compensated prediction to reduce intra- and inter-frame redundancies in medical image sequences. Simulation examples are presented.

Item (Open Access): A simulation study of two-level forward error correction for lost packet recovery in B-ISDN/ATM (IEEE, 1993)
Oğuz, Nihat Cem; Ayanoğlu, E.
The major source of errors in B-ISDN/ATM systems is expected to be buffer overflow during congested conditions, resulting in lost packets. A single lost or errored ATM cell will cause retransmission of the entire packet data unit (PDU) that it belongs to. The performance of the end-to-end system can be made much less sensitive to cell loss by means of forward error correction. In this paper, we present the results of a simulation study for an ATM network where forward error correction is performed at both the cell level and the PDU level. The results indicate that (i) cell losses are highly correlated in time, and analytical models ignoring this fact will not yield accurate results, (ii) the correlation of cell losses is similar to burst errors in digital communication, and similar code interleaving techniques should be used, (iii) coding cells and PDUs separately provides this interleaving effect, and this joint code outperforms coding only at the cell level or only at the PDU level in almost all cases simulated.
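For the thesis entry on recovery of propagating signals, the sketch below illustrates the kind of condition-number comparison the abstract describes: build the linear map from the unknown field to the available samples and compare the conditioning obtained with different sample distributions. A DFT-domain free-space propagator stands in for the fractional Fourier transform relations used in the thesis, and the aperture size, propagation distances, and sample patterns are arbitrary illustrative choices.

```python
import numpy as np

N = 64
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)   # unitary DFT

def propagator(z):
    """Unitary map from the field at plane 0 to the field at 'distance' z (angular-spectrum style)."""
    k = np.fft.fftfreq(N) * N                                # spatial-frequency index
    return F.conj().T @ np.diag(np.exp(-1j * np.pi * z * (k / N) ** 2)) @ F

P1, P2 = propagator(1.0), propagator(1.05)                   # two nearby measurement planes

# Distribution A: samples spread over the whole aperture, half taken on each plane.
rows_spread = np.vstack([P1[0:N:2, :], P2[1:N:2, :]])
# Distribution B: samples crowded into one half of the aperture on both planes.
rows_crowded = np.vstack([P1[:N // 2, :], P2[:N // 2, :]])

for name, A in [("spread", rows_spread), ("crowded", rows_crowded)]:
    print(f"{name:8s} samples: cond = {np.linalg.cond(A):.2e}")   # crowded samples are far worse conditioned
```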
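For the medical image sequence compression entry, the toy sketch below shows the two ingredients named in the abstract: block-matching motion-compensated prediction between consecutive frames, followed by a 2-D DCT of the prediction residual. The frame content, block size, and search range are invented for the illustration and do not reflect the paper's settings.

```python
import numpy as np
from scipy.fft import dctn

def best_match(block, prev, y, x, search=4):
    """Exhaustive search in `prev` around (y, x); returns the offset with minimum SAD."""
    h, w = block.shape
    best_sad, best_off = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy <= prev.shape[0] - h and 0 <= xx <= prev.shape[1] - w:
                sad = np.abs(prev[yy:yy + h, xx:xx + w] - block).sum()
                if sad < best_sad:
                    best_sad, best_off = sad, (dy, dx)
    return best_off

rng = np.random.default_rng(1)
prev_frame = rng.random((64, 64))
curr_frame = np.roll(prev_frame, (2, -1), axis=(0, 1))   # simulated global motion

B = 8
frame_energy = residual_energy = 0.0
for y in range(0, 64, B):
    for x in range(0, 64, B):
        block = curr_frame[y:y + B, x:x + B]
        dy, dx = best_match(block, prev_frame, y, x)
        residual = block - prev_frame[y + dy:y + dy + B, x + dx:x + dx + B]
        coeffs = dctn(residual, norm="ortho")   # residual energy compacts into few DCT coefficients
        frame_energy += (block ** 2).sum()
        residual_energy += (residual ** 2).sum()

print(f"residual energy / frame energy = {residual_energy / frame_energy:.3f}")
```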
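For the B-ISDN/ATM entry, the sketch below illustrates observations (ii) and (iii) of the abstract: when losses arrive in bursts, consecutive lost cells must be spread across different parity groups for a simple erasure code to recover them. A single XOR parity cell per group (which can repair at most one missing cell) is used as the illustrative code; the group size and burst length are arbitrary, and parity cells are assumed to be delivered.

```python
import numpy as np

k, groups = 4, 8                  # assumed: k data cells plus one XOR parity cell per group
burst = set(range(8, 12))         # four consecutive lost cells (bursty loss)

def recoverable(assignment):
    """True if every parity group loses at most one cell from the burst."""
    losses = np.zeros(groups, dtype=int)
    for cell in burst:
        losses[assignment[cell]] += 1
    return bool((losses <= 1).all())

sequential = np.repeat(np.arange(groups), k)   # cells 0-3 -> group 0, 4-7 -> group 1, ...
interleaved = np.tile(np.arange(groups), k)    # cells assigned to groups round-robin

print("sequential grouping recoverable:", recoverable(sequential))    # False: group 2 loses all four cells
print("interleaved grouping recoverable:", recoverable(interleaved))  # True: burst spread over four groups
```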