Browsing by Subject "Reliability"
Now showing 1 - 20 of 44
Item Open Access
Analysis of assembly systems for interdeparture time variability and throughput
(Taylor & Francis, 2002) Sabuncuoğlu, İ.; Erel, E.; Kok, A. G.
This paper studies the effect of the number of component stations (parallelism), work transfer, processing time distributions, buffers, and buffer allocation schemes on the throughput and interdeparture time variability of assembly systems. As an alternative to work transfer, variability transfer is introduced and its effectiveness is assessed. Previous research has indicated that the optimal throughput displays an anomaly at certain processing time distributions; this phenomenon is now thoroughly analyzed and the underlying details are uncovered. This study also yields several new findings that convey important practical implications.

Item Open Access
Analysis of design parameters in safety-critical computers
(IEEE Computer Society, 2018) Ahangari, H.; Atik, F.; Ozkok, Y. I.; Yildirim, A.; Ata, S. O.; Ozturk, O.
Nowadays, safety-critical computers are extensively used in many civil domains such as transportation, including railways, avionics, and automotive. In evaluating these safety-critical systems, previous studies have considered different metrics, but some safety design parameters, such as failure diagnostic coverage (C) or the common cause failure (CCF) ratio, have not been seriously taken into account. Moreover, in some cases safety has not been compared with standard safety integrity levels (IEC-61508: SIL1-SIL4), or these levels have not even been met. Most often, it is not clear which part of the system is the Achilles heel and how the design can be improved to reach standard safety levels. Motivated by such design ambiguities, we aim to study the effect of various design parameters on safety in some prevalent safety configurations, namely 1oo2 and 2oo3, with 1oo1 used as a reference. By employing Markov modeling, we analyzed the sensitivity of safety to important parameters including the failure rate of the processor, failure diagnostic coverage, CCF ratio, and test and repair rates. This study aims to provide a deeper understanding of the influence of variation in design parameters on safety. Consequently, to meet an appropriate safety integrity level, instead of improving some parts of a system blindly, it will be possible to make an informed decision on the more relevant parameters.
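The Markov-model sensitivity analysis described in the abstract above can be pictured with a deliberately simplified sketch (not the authors' actual model): a 1oo2 pair in which each channel fails at rate lambda, a fraction C of failures is detected and repaired at rate mu, and a fraction beta of failures is a common cause failure that takes both channels down at once. All parameter names and values below are assumptions chosen for illustration only.

```python
# Illustrative-only 1oo2 Markov sketch (not the paper's model).
# States: 0 = both channels OK, 1 = one failure detected (under repair),
#         2 = one failure undetected, 3 = dangerous system failure.
import numpy as np

lam  = 1e-5   # per-channel failure rate [1/h]   (assumed)
C    = 0.9    # diagnostic coverage              (assumed)
beta = 0.02   # common cause failure fraction    (assumed)
mu   = 0.125  # repair rate [1/h], ~8 h MTTR     (assumed)

Q = np.zeros((4, 4))
Q[0, 1] = 2 * (1 - beta) * lam * C        # detected single failure
Q[0, 2] = 2 * (1 - beta) * lam * (1 - C)  # undetected single failure
Q[0, 3] = beta * lam                      # common cause failure of both channels
Q[1, 0] = mu                              # repair completes
Q[1, 3] = lam                             # second channel fails during repair
Q[2, 3] = lam                             # second channel fails, first still undetected
for i in range(4):
    Q[i, i] = -Q[i].sum()                 # generator diagonal

p, dt, T = np.array([1.0, 0, 0, 0]), 1.0, 1e4   # hours
for _ in range(int(T / dt)):              # forward-Euler transient solve of dp/dt = pQ
    p = p + dt * (p @ Q)

print("P(dangerous failure) after %.0f h: %.3e" % (T, p[3]))
print("crude average PFH estimate: %.3e" % (p[3] / T))
```

Sweeping C, beta, or mu in a sketch like this reproduces the kind of sensitivity comparison the abstract describes, with the resulting PFH figure judged against the SIL bands of IEC 61508.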
Item Open Access
An analysis of manipulated information and respective alternative costs in information systems and in decision making structures
(International Institute of Informatics and Systemics, IIIS, 2006) Güvenen, O.; Öztürk, M. H.
Today, information technologies form the basis of the most important decision support systems used in academia, business, and politics. The effectiveness and success of operations supported by information systems are directly correlated with the quantity, accuracy, timing, credibility, and quality of the information that prevails in the system. The rapid development of these technologies in recent decades allows a high level of information transaction and communication throughout the whole world. The quantity of information that flows through information systems has increased tremendously. New research and technological applications in this area aim to improve the systems quantitatively. However, despite a huge and continuous increase in information flow, the quality and reliability of the information in these systems are doubtful from many perspectives. We believe that quality and reliability considerations in information technologies are not handled adequately by researchers and users. In this research we therefore discuss the quality and reliability aspects of information flow. To be able to evaluate information from a qualitative perspective, we believe it is crucial to handle the problem in science, and especially in the social sciences, by endogenising socio-economic phenomena and science methodology approaches. We hope this work will create a stimulus for researchers of information technologies and systems to give importance to the reliability and quality of information.

Item Embargo
Architecture for safety-critical transportation systems
(Elsevier B.V., 2023-03-15) Ahangari, Hamzeh; Özkök, Yusuf İbrahim; Yıldırım, Asil; Say, Fatih; Atık, Funda; Ozturk, Ozcan
In many industrial systems, including transportation, fault tolerance is a key requirement. Usually, fault tolerance is achieved by redundancy, where replication of critical components is used. In the case of transportation computing systems, this redundancy starts with the processing element. In this paper, we use Markov models to assess the level of safety with different redundancy techniques used in the literature. More specifically, we give implementation details for various architecture options and evaluate one-out-of-two (1oo2) and two-out-of-three (2oo3) implementations. We observe that both 1oo2 and 2oo3 can reduce the average probability of failure per hour (PFH) down to 10^-7, which provides Level-3 (SIL3) safety according to the standards.

Item Open Access
Asymptotic analysis of reliability for switching systems in light and heavy traffic conditions
(Birkhäuser, Boston, 2000) Anisimov, Vladimir V.; Limnios, N.; Nikulin, M.
An asymptotic analysis of flows of rare events switched by some random environment is provided. An approximation by nonhomogeneous Poisson flows in the case of a mixing environment is studied. Special notions of an S-set and a "monotone" structure for a finite Markov environment are introduced. An approximation by Poisson flows with Markov switches in the case of an asymptotically consolidated environment is proved. An analysis of the first exit time from a subset is also given. In heavy traffic conditions, an averaging principle for trajectories with Poisson approximation for flows of rare events in systems with fast switches is proved. The method of proof is based on limit theorems for processes with semi-Markov switches. Applications to the reliability analysis of state-dependent Markov and semi-Markov queueing systems in light and heavy traffic conditions are considered.

Item Open Access
Bluetooth broadcasting performance: reliability and throughput
(Springer, Berlin, Heidelberg, 2006) Doğan, Kaan; Gürel, Güray; Kamçı, A. Kerim; Körpeoğlu, İbrahim
This paper studies the performance of the Bluetooth broadcasting scheme. The transmission of a Bluetooth broadcast packet is repeated several times to increase the reliability of the broadcast. We have analyzed the effects of baseband ACL packet type preference on broadcast performance, in terms of reliability and effective throughput, over different channel characteristics (i.e., bit error rate). As a result of our analysis, we determined the optimal packet type and retransmission count combinations that provide the highest effective throughput for various practical BER ranges. These results can be used at the Bluetooth baseband layer to dynamically adapt to varying channel conditions and to achieve good broadcast performance.
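As a rough illustration of the reliability/throughput trade-off discussed in the Bluetooth broadcasting abstract above (not the authors' exact analysis), one can treat an unprotected broadcast copy as lost whenever any payload bit is in error, and an N-fold repeated broadcast as successful if at least one copy survives; effective throughput then divides the delivered payload by the airtime of all N copies. The packet size and slot count below are assumed, DH1-like values, not figures from the paper.

```python
# Illustrative sketch of broadcast reliability vs. retransmission count.
# Assumes independent bit errors and no FEC; numbers are not from the paper.

def packet_success(ber: float, payload_bits: int) -> float:
    """Probability that a single broadcast copy arrives with no bit errors."""
    return (1.0 - ber) ** payload_bits

def broadcast_metrics(ber: float, payload_bits: int, slots: int, repeats: int):
    """Reliability and effective throughput (payload bits per slot) of N repeats."""
    p_one = packet_success(ber, payload_bits)
    reliability = 1.0 - (1.0 - p_one) ** repeats     # at least one copy survives
    throughput = reliability * payload_bits / (repeats * slots)
    return reliability, throughput

# Example: a single-slot packet carrying ~216 payload bits (assumed, DH1-like).
# Report the smallest repeat count reaching 99% reliability and its throughput.
for ber in (1e-5, 1e-4, 1e-3):
    for n in range(1, 9):
        rel, thr = broadcast_metrics(ber, 216, 1, n)
        if rel >= 0.99:
            break
    print(f"BER={ber:.0e}: repeats={n}, reliability={rel:.4f}, "
          f"throughput={thr:.1f} bits/slot")
```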
Item Open Access
Boolean normal forms, shellability, and reliability computations
(Society for Industrial and Applied Mathematics, 2000) Boros, E.; Crama, Y.; Ekin, O.; Hammer, P. L.; Ibaraki, T.; Kogan, A.
Orthogonal forms of positive Boolean functions play an important role in reliability theory, since the probability that they take the value 1 can be easily computed. However, few classes of disjunctive normal forms are known for which orthogonalization can be performed efficiently. An interesting class with this property is the class of shellable disjunctive normal forms (DNFs). In this paper, we present some new results about shellability. We establish that every positive Boolean function can be represented by a shellable DNF, we propose a polynomial procedure to compute the dual of a shellable DNF, and we prove that testing the so-called lexico-exchange (LE) property (a strengthening of shellability) is NP-complete.
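A small worked example of the property mentioned in the abstract above: when the terms of a DNF are pairwise orthogonal (any two terms require opposite values of some variable), the events "term i is satisfied" are mutually exclusive, so Pr[f = 1] is simply the sum of the term probabilities under independent component states. The function and the component reliabilities below are made up for illustration.

```python
# Reliability of a positive Boolean function from an orthogonal DNF.
# Terms are dicts {variable: required value}; orthogonality means any two
# terms conflict on some variable, so their satisfying events are disjoint.
# Example function (made up): f = x1 OR (x2 AND x3),
# orthogonalized as:          f = x1 OR (NOT x1 AND x2 AND x3).

from math import prod

def term_probability(term: dict, p: dict) -> float:
    """Probability that one conjunction is satisfied, components independent."""
    return prod(p[v] if want else 1.0 - p[v] for v, want in term.items())

def reliability_from_orthogonal_dnf(terms: list, p: dict) -> float:
    """Pr[f = 1] is a plain sum because orthogonal terms are mutually exclusive."""
    return sum(term_probability(t, p) for t in terms)

p = {"x1": 0.9, "x2": 0.8, "x3": 0.95}            # component reliabilities (assumed)
orthogonal_terms = [
    {"x1": True},                                  # x1
    {"x1": False, "x2": True, "x3": True},         # NOT x1 AND x2 AND x3
]
print(reliability_from_orthogonal_dnf(orthogonal_terms, p))  # 0.9 + 0.1*0.8*0.95 = 0.976
```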
Item Open Access
COMD-free continuous-wave high-power laser diodes by using the multi-section waveguide method
(SPIE - International Society for Optical Engineering, 2023-03-14) Demir, Abdullah; Ebadi, Kaveh; Liu, Y.; Sünnetçioğlu, Ali Kaan; Gündoğdu, Sinan; Şengül, Serdar; Zhao, Y.; Lan, Y.; Yang, G.; Zediker, Mark S.; Zucker, Erik P.
Catastrophic optical mirror damage (COMD) limits the output power and reliability of laser diodes (LDs). The self-heating of the laser contributes to the facet temperature, but this has not been addressed so far. This study investigates a two-section waveguide method targeting significantly reduced facet temperatures. The LD waveguide is divided into two electrically isolated sections along the cavity: a laser section and a passive waveguide. The laser section is pumped at high current levels to achieve laser output. The passive waveguide is biased at low injection currents to obtain a transparent waveguide with negligible heat generation. This design limits the thermal impact of the laser section on the facet, and the transparent waveguide allows lossless transport of the laser light to the output facet. The fabricated GaAs-based LDs have waveguide dimensions of 5 mm x 100 µm, with passive waveguide section lengths varied from 250 to 1500 µm. The lasers were operated continuous-wave up to the maximum achievable power of around 15 W. We demonstrated that the two-section waveguide method effectively separates the heat load of the laser from the facet and results in much lower facet temperatures (Tf). For instance, at a laser current of 8 A, the standard laser has Tf = 90 °C, while a two-section laser with a 1500 µm long passive waveguide section has Tf = 60 °C. While traditional LDs show COMD failures, the multi-section waveguide LDs are COMD-free. Our technique and results provide a pathway for high-reliability LDs, which would find diverse applications in semiconductor lasers.

Item Open Access
Comparison of different balance scales in Parkinson's disease
(Turkey Association of Physiotherapists, 2009) Gündüz, A. G.; Otman, A. S.; Kose, N.; Bilgin, S.; Elibol, B.
Purpose: The main purpose of our study was to find out whether different methods used in evaluating balance are reliable and valid for Parkinson's disease. Material and methods: In the study, thirty idiopathic Parkinson's patients were evaluated with the Berg Balance Scale, the Tinetti Performance Oriented Balance and Gait Scale, and clinical balance and mobility tests during their "off" and "on" periods. Additionally, the patients were evaluated with the motor evaluation part of the Unified Parkinson's Disease Rating Scale and the Modified Hoehn and Yahr Scale. All the evaluation tests were repeated 7 days after the first applications. Results: Comparisons revealed that all the balance evaluation tests were reliable and valid for Parkinson's patients. It was also revealed that the Berg Balance Scale is more reliable (ICC=0.99) and shows a higher correlation with the motor part of the Unified Parkinson's Disease Rating Scale (r=-0.75, p<0.05) and the Modified Hoehn and Yahr Scale (r=-0.75/0.71, p<0.05). Conclusion: Our study shows that the Berg Balance Scale, the Tinetti Performance Oriented Balance and Gait Scale, and clinical balance and mobility tests can be applied reliably to patients with Parkinson's disease, and among these tests the Berg Balance Scale gives the most comprehensive information regarding the evaluation of different parameters of balance.

Item Open Access
Compiler directed network-on-chip reliability enhancement for chip multiprocessors
(Association for Computing Machinery, 2010-04) Ozturk, O.; Kandemir, M.; Irwin, M. J.; Narayanan, S. H. K.
Chip multiprocessors (CMPs) are expected to be the building blocks for future computer systems. While architecting these emerging CMPs is a challenging problem on its own, programming them is even more challenging. As the number of cores accommodated in chip multiprocessors increases, network-on-chip (NoC) type communication fabrics are expected to replace traditional point-to-point buses. Most of the prior software-related work targeting CMPs focuses on performance and power aspects. However, as technology scales, the components of a CMP are increasingly exposed to both transient and permanent hardware failures. This paper presents and evaluates a compiler-directed, power- and performance-aware reliability enhancement scheme for network-on-chip (NoC) based chip multiprocessors (CMPs). The proposed scheme improves on-chip communication reliability by duplicating messages traveling across CMP nodes such that, for each original message, its duplicate uses a different set of communication links as much as possible (to satisfy the performance constraint). In addition, our approach tries to reuse communication links across the different phases of the program to maximize link shutdown opportunities for the NoC (to satisfy the power constraint). Our results show that the proposed approach is very effective in improving on-chip network reliability, without causing excessive power or performance degradation. In our experiments, we also evaluate performance-oriented and energy-oriented versions of our compiler-directed reliability enhancement scheme and compare them to two pure hardware-based fault-tolerant routing schemes.
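The link-disjoint duplication idea in the NoC abstract above can be illustrated with a toy sketch (not the paper's compiler algorithm): on a 2D mesh, route the original message with dimension-ordered XY routing and its duplicate with YX routing, so the two copies share no links whenever source and destination differ in both dimensions. The coordinates and the routing helper below are assumptions made only for illustration.

```python
# Toy sketch: link-disjoint original/duplicate routes on a 2D mesh NoC.
# Not the paper's algorithm; just shows the XY-vs-YX duplication idea.

def route(src, dst, xy_first: bool):
    """Return the list of directed links visited by dimension-ordered routing."""
    (x, y), links = src, []
    dims = ((0, dst[0]), (1, dst[1])) if xy_first else ((1, dst[1]), (0, dst[0]))
    for dim, target in dims:
        while (x, y)[dim] != target:
            step = 1 if target > (x, y)[dim] else -1
            nxt = (x + step, y) if dim == 0 else (x, y + step)
            links.append(((x, y), nxt))
            x, y = nxt
    return links

src, dst = (0, 0), (2, 3)
primary   = route(src, dst, xy_first=True)    # original message: XY routing
duplicate = route(src, dst, xy_first=False)   # duplicate message: YX routing
shared = set(primary) & set(duplicate)
print("links shared by both copies:", shared)  # empty when src/dst differ in x and y
```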
Item Open Access
Compiler-supported selective software fault tolerance
(IEEE, 2023-11-15) Turhan, Tuncer; Tekgul, H.; Öztürk, Özcan
As technology advances, processors shrink in size and are manufactured using higher-density transistors, making them cheaper, more power-efficient, and more powerful. While this progress is most beneficial to end users, it also makes processors more vulnerable to outside radiation, which causes soft errors, mostly single-bit flips in data. In applications where a certain margin of error is acceptable and availability is important, existing software fault tolerance techniques may not be directly applicable because of the unacceptable performance overheads they introduce to the system. We propose a technique that ranks instructions in terms of their criticality and generates more reliable source code. In this way, we improve reliability while minimizing the performance overheads.

Item Open Access
Cultural affordances: Does model reliability affect over-imitation in preschoolers?
(Elsevier Ltd, 2021-03) Allen, Jedediah W. P.; Sümer, Cansu; Ilgaz, Hande
One general perspective on why children over-imitate is that they are learning about the normatively correct way of doing things. If correct, then characteristics of the demonstrator should be relevant. Accordingly, the current study aimed to investigate how the reliability of an adult model influences children's selectivity of what to imitate in an over-imitation situation (i.e., when some of the actions are causally irrelevant). Seventy-eight preschoolers between 3 and 6 years of age participated at school or in the lab on four tasks. A canonical trust paradigm was used to manipulate model reliability in terms of past accuracy. Children then watched either the reliable or unreliable model open a transparent box using the same relevant and irrelevant actions. In addition, children completed a standard ToM battery. Results indicated that children were more likely to over-imitate from a demonstration given by the reliable versus the unreliable model. Children's ToM abilities were not related to their over-imitation behavior but showed some relations to their trust performance. Overall, the results provide support for a social situational approach to over-imitation that fits most closely with the norm learning perspective.

Item Open Access
Custom hardware optimizations for reliable and high performance computer architectures
(2020-09) Ahangari, Hamzeh
In recent years, we have witnessed a huge wave of innovations in areas such as Artificial Intelligence (AI) and the Internet of Things (IoT). In this trend, software tools are constantly and increasingly demanding more processing power, which can no longer be met by traditional processors. In response to this need, a diverse range of hardware, including GPUs, FPGAs, and AI accelerators, is coming to the market every day. On the other hand, while hardware platforms are becoming more power-hungry due to higher performance demands, the concurrent reduction in transistor size and the strong emphasis on reducing supply voltage have always been sources of reliability concerns in circuits. This is particularly applicable to error-sensitive applications, such as the transportation and aviation industries, where an error can be catastrophic. The reliability issues may have other causes too, such as harsh environmental conditions. These two problems of modern electronic circuits, namely the need for higher performance and higher reliability at the same time, require appropriate solutions. In order to satisfy both the performance and the reliability constraints, either designs based on reconfigurable circuits, such as FPGAs, or designs based on Commercial-Off-The-Shelf (COTS) components, such as general-purpose processors, can be an appropriate approach, because these platforms can be used in a wide variety of applications. In this regard, three solutions have been proposed in this thesis.
These solutions target 1) safety and reliability at the system level using redundant processors, 2) performance at the architecture level using multiple accelerators, and 3) reliability at the circuit level through the use of redundant transistors. Specifically, in the first work, the contribution of some prevalent parameters to the design of safety-critical computers using COTS processors is discussed. Redundant architectures are modeled by Markov chains, and the sensitivity of system safety to these parameters is analyzed. Most importantly, the significant presence of Common Cause Failures (CCFs) is investigated. In the second work, the design and implementation of an HLS-based, FPGA-accelerated, high-throughput, work-efficient, synthesizable template-based graph processing framework is presented. The template framework is simplified for easy mapping to FPGA, even for software programmers. The framework is experimented with on the Intel state-of-the-art Xeon+FPGA platform to implement iterative graph algorithms. Besides the high-throughput pipeline, the work-efficient mode significantly reduces total graph processing run-time with a novel active-list design. In the third work, the Joint SRAM (JSRAM) cell, a novel circuit-level technique to exploit the trade-off between reliability and memory size, is introduced. This idea is applicable to any SRAM structure, such as cache memory, register files, FPGA block RAM, or FPGA look-up tables (LUTs), and even latches and flip-flops. In fault-prone conditions, the structure can be configured in such a way that four cells are combined together at the circuit level to form one large and robust memory bit. Unlike prevalent hardware redundancy techniques, such as Triple Modular Redundancy (TMR), there is no explicit majority voter at the output. The proposed solution mainly focuses on transient faults, where the reliable mode can provide auto-correction and full immunity against single faults.

Item Open Access
The distribution of the residual lifetime and its applications
(1991) Çağlar, Mine Alp
Let T be a continuous positive random variable representing the lifetime of an entity. This entity could be a human being, an animal or a plant, or a component of a mechanical or electrical system. For nonliving objects, the lifetime is defined as the total amount of time for which the entity carries out its function satisfactorily. The concept of aging involves the adverse effects of age, such as an increased probability of failure due to wear. In this thesis, we consider certain characteristics of the residual lifetime distribution at age t, such as the mean, median, and variance, as describing aging. The following families of statistical distributions are studied from this point of view: 1. Gamma with two parameters, 2. Weibull with two parameters, 3. Lognormal with two parameters, 4. Inverse Polynomial with one parameter. Gamma and Weibull distributions are fitted to actual data.
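For reference, the residual-lifetime quantities named in the thesis abstract above are standard; in terms of the survival function they take the following form (this is textbook material, not a result quoted from the thesis itself).

```latex
% Residual lifetime at age t for a lifetime T with survival function \bar{F}(x) = P(T > x)
\[
  P(T - t > x \mid T > t) \;=\; \frac{\bar{F}(t + x)}{\bar{F}(t)}, \qquad x \ge 0,
\]
% with mean residual life, one common characteristic of aging
\[
  m(t) \;=\; E[\,T - t \mid T > t\,] \;=\; \frac{1}{\bar{F}(t)} \int_{t}^{\infty} \bar{F}(u)\, du .
\]
```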
Item Open Access
An evaluation of the reliability of probability judgments across response modes and over time
(Wiley, 1993) Whitcomb, K. M.; Önkal, D.; Benson, P. G.; Curley, S. P.
Despite the importance of probability assessment methods in behavioral decision theory and decision analysis, little attention has been directed at evaluating their reliability and validity. In fact, no comprehensive study of reliability has been undertaken. Since reliability is a necessary condition for validity, this oversight is significant. The present study was motivated by that oversight. We investigated the reliability of probability measures derived from three response modes: numerical probabilities, pie diagrams, and odds. Unlike previous studies, the experiment was designed to distinguish systematic deviations in probability judgments, such as those due to experience or practice, from random deviations. It was found that subjects assessed probabilities reliably for all three assessment methods, regardless of the reliability measures employed. However, a small but statistically significant decrease over time in the magnitudes of assessed probabilities was observed. This effect was linked to a decrease in subjects' overconfidence during the course of the experiment.

Item Open Access
Facet cooling in high-power InGaAs/AlGaAs lasers
(Institute of Electrical and Electronics Engineers Inc., 2019) Arslan, Seval; Gündoğdu, Sinan; Demir, Abdullah; Aydınlı, A.
Several factors limit the reliable output power of a semiconductor laser under CW operation, such as carrier leakage, thermal effects, and catastrophic optical mirror damage (COMD). Ever higher operating powers may be possible if COMD can be avoided. Despite exotic facet engineering and progress in non-absorbing mirrors, the temperature rise at the facets puts a strain on the long-term reliability of these diodes. Although thermoelectrically isolating the heat source away from the facets with non-injected windows helps lower the facet temperature, the data suggest that the farther the heat source is from the facets, the lower the temperature. In this letter, we show that longer non-injected sections lead to cooler windows, and that biasing this section to transparency eliminates the optical loss. We report facet temperature reduction that reaches below the bulk temperature in high-power InGaAs/AlGaAs lasers under QCW operation with electrically isolated and biased windows. Acting as transparent optical interconnects, the biased sections connect the active cavity to the facets. This approach can be applied to a wide range of semiconductor lasers to improve device reliability, as well as enabling the monolithic integration of lasers in photonic integrated circuits.

Item Open Access
Fair task allocation in crowdsourced delivery
(Institute of Electrical and Electronics Engineers, 2018) Basik, F.; Gedik, B.; Ferhatosmanoglu, H.; Wu, K.
Faster and more cost-efficient, crowdsourced delivery is needed to meet the growing customer demands of many industries. In this work, we introduce a new crowdsourced delivery platform that takes fairness towards workers into consideration while maximizing the task completion ratio. Since redundant assignments are not possible in delivery tasks, we first introduce a 2-phase assignment model that increases the reliability of a worker completing a given task. To demonstrate the effectiveness of our model in practice, we present both offline and online versions of our proposed algorithm, called F-Aware. Given a task-to-worker bipartite graph, F-Aware assigns each task to a worker so as to maximize fairness, while allocating tasks to use worker capacities as much as possible. We present an evaluation of our algorithms with respect to running time, task completion ratio, fairness, and assignment ratio. Experiments show that F-Aware runs around 10^7 times faster than the TAR-optimal solution and assigns 96.9% of the tasks that can be assigned by it. Moreover, F-Aware is shown to provide a much fairer distribution of tasks to workers than the best competitor algorithm.
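The fairness-aware assignment idea summarized in the crowdsourced-delivery abstract above can be sketched as a simple greedy pass over a task-to-worker bipartite graph: among the workers with remaining capacity who can serve a task, pick the one with the least work assigned so far. This only conveys the flavor of such an allocation, not the published F-Aware algorithm; all names and data structures are assumptions.

```python
# Greedy fairness-aware sketch over a task-to-worker bipartite graph.
# Illustrative only; not the published F-Aware algorithm.

def assign_tasks(eligible: dict, capacity: dict) -> dict:
    """eligible: task -> list of workers able to serve it.
    capacity: worker -> maximum number of tasks.
    Returns task -> worker, favoring the least-loaded eligible worker."""
    load, assignment = {w: 0 for w in capacity}, {}
    for task, workers in eligible.items():
        candidates = [w for w in workers if load[w] < capacity[w]]
        if not candidates:
            continue                                  # task left unassigned
        chosen = min(candidates, key=lambda w: load[w])   # fairness: spread load
        assignment[task] = chosen
        load[chosen] += 1
    return assignment

eligible = {"t1": ["w1", "w2"], "t2": ["w1"], "t3": ["w1", "w2"], "t4": ["w2"]}
capacity = {"w1": 2, "w2": 2}
print(assign_tasks(eligible, capacity))    # tasks end up spread across w1 and w2
```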
Item Open Access
Fen ve teknoloji dersi öğretmen adaylarının bilimsel süreç becerilerinin ölçülmesine ilişkin bir test geliştirme çalışması
(Ekip Ltd. Sti., 2013) Karslı, F.; Ayas, Alipaşa
This study aimed to develop a valid and reliable multi-format science process skills test (BİSBET) for measuring the science process skills of prospective science and technology teachers. For this purpose, taking into account the nature of the behavior to be measured, a 36-item test was developed, consisting of 25 multiple-choice and 11 open-ended items, in line with the measurement and assessment techniques recently recommended by curriculum developers. The test was administered to a total of 197 prospective science and technology teachers, and validity, reliability, and item analyses were carried out. Expert opinions were consulted to provide evidence for the content validity of the test, and the hypothesis-testing method was used to provide evidence for its construct validity. The reliability of the test was established through internal consistency analysis for the multiple-choice items, and through internal consistency and inter-rater agreement for the open-ended items. According to the validity, reliability, and item analysis results, BİSBET is a valid and reliable test that can be used to measure the science process skills of prospective science and technology teachers.

Item Open Access
Improving chip multiprocessor reliability through code replication
(Pergamon Press, 2010) Ozturk, O.
Chip multiprocessors (CMPs) are promising candidates for next-generation computing platforms, utilizing large numbers of gates and reducing the effects of high interconnect delays. One of the key challenges in CMP design is to balance out often-conflicting demands. Specifically, for today's image/video applications and systems, power consumption, memory space occupancy, area cost, and reliability are as important as performance. Therefore, a compilation framework for CMPs should consider multiple factors during the optimization process. Motivated by this observation, this paper addresses energy-aware reliability support for CMP architectures, targeting in particular array-intensive image/video applications. There are two main goals behind our compiler approach. First, we want to minimize the energy wasted in executing replicas when there is no error during execution (which should be the most frequent case in practice). Second, we want to minimize the time to recover (through the replicas) from an error when it occurs. This approach has been implemented and tested using four parallel array-based applications from the image/video processing domain. Our experimental evaluation indicates that the proposed approach saves significant energy over the case when all the replicas are run at the highest voltage/frequency level, without sacrificing any reliability relative to the latter.

Item Open Access
Instruction-level reliability improvement for embedded systems
(IEEE, 2020-09) Tekgül, Hakan; Öztürk, Özcan
With the increasing number of applications in embedded computing systems, it has become indispensable for system designers to consider multiple objectives, including power, performance, and reliability. Among these, reliability is a bigger constraint for safety-critical applications. For example, the fault tolerance of transportation systems has become very critical with the use of many embedded on-board devices.
Many techniques have been proposed in the past decade to increase the fault tolerance of such systems. However, many of these techniques come with a significant overhead, which makes them infeasible in most embedded execution scenarios. Motivated by this observation, our main contribution in this paper is to propose and evaluate an instruction-criticality-based reliable source code generation algorithm. Specifically, we propose an instruction ranking formula based on our detailed fault injection experiments. We use the instruction rankings, along with overhead tolerance limits, to generate source code with increased fault tolerance. The primary goal behind this work is to improve the reliability of an application while keeping the performance effects minimal. We apply state-of-the-art reliability techniques to evaluate our approach on a set of benchmarks. Our experimental results show that the proposed approach achieves up to an 8% decrease in error rates with only 10% performance overhead. The error rates decrease further with higher overhead tolerances.
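The selective, ranking-driven hardening described in the two instruction-level abstracts above can be pictured with a toy sketch (not the papers' ranking formula or code generator): given a criticality score and a cost for each instruction, protect instructions in decreasing score order until an overhead budget is exhausted. The scores, cycle counts, and the 10% budget below are assumptions chosen only to mirror the figures quoted in the abstract.

```python
# Toy sketch of selective instruction hardening under an overhead budget.
# Scores and cycle counts are made up; real criticality would come from
# fault injection experiments.

def select_for_protection(instructions, budget_ratio=0.10):
    """instructions: list of (name, criticality, cycles).
    Duplicating an instruction is assumed to add roughly its own cycle count.
    Greedily protect the most critical instructions within the overhead budget."""
    base_cycles = sum(cyc for _, _, cyc in instructions)
    budget = budget_ratio * base_cycles
    protected, extra = [], 0.0
    for name, score, cyc in sorted(instructions, key=lambda t: -t[1]):
        if extra + cyc <= budget:
            protected.append(name)
            extra += cyc
    return protected, extra / base_cycles

program = [("load_a", 0.9, 2), ("mul", 0.7, 3), ("branch", 0.95, 1),
           ("store_b", 0.4, 2), ("add", 0.2, 1), ("load_c", 0.6, 2)]
chosen, overhead = select_for_protection(program)
print(chosen, f"overhead={overhead:.1%}")   # only the most critical fit the budget
```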